Department of Computer Engineering Theses
Permanent URI for this collection: https://hdl.handle.net/20.500.12416/58
Browsing Department of Computer Engineering Theses by Title
Now showing 1 - 20 of 28
Master Thesis: 3D reconstruction of a scene using stereo images (2008). Taşel, Faris Serdar.
Two-dimensional photographs carry no depth information. One solution for determining the location of an object in a three-dimensional environment, as suggested by nature, is to use more than one photograph. Extracting depth information from stereo images is the purpose of this thesis. The thesis analyzes the steps of, and the problems encountered in, the three-dimensional reconstruction process, and explains solutions based on epipolar geometry using several feature-based matching techniques. Stereo images taken from two calibrated cameras viewing the same scene are used to obtain estimated three-dimensional data. The pinhole camera model, epipolar geometry and its recovery are discussed, and common stereo triangulation methods are explained in the chapters of the thesis. Feature extraction and matching, which are used in the reconstruction process, are also examined. Some of the methods used in the thesis are presented with algorithmic solutions and mathematical notation. Significant advantages and disadvantages of the methods are briefly discussed, and the problems encountered are addressed with fundamental approaches.

Master Thesis: A computational analysis of a language structure in natural language text processing (2005). Eş, Sinan.
Text categorization or classification is the general task of classifying unorganized natural language texts by subject matter or category. Electronic mail (e-mail) filtering is a binary text classification problem in which user emails are classified as legitimate (non-spam) or unwanted (spam). In this study, we tried to find a filtering solution that can automatically classify emails into spam and legitimate categories.
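Such a spam/legitimate filter can be sketched with a multinomial Naive Bayes over bag-of-words counts. This is an illustrative stand-in only: the thesis does not specify its classifier, and the vocabulary and training data below are toy examples.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label) pairs; label is "spam" or "ham"."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def classify_nb(tokens, word_counts, class_counts, vocab):
    """Pick the class maximizing log P(class) + sum of log P(word | class),
    with Laplace (add-one) smoothing for unseen words."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in tokens:
            score += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy corpus (hypothetical): spam mentions "free"/"prize", ham does not.
train = [
    (["free", "winner", "click"], "spam"),
    (["free", "prize", "click"], "spam"),
    (["meeting", "report", "monday"], "ham"),
    (["project", "report", "deadline"], "ham"),
]
model = train_nb(train)
print(classify_nb(["free", "prize"], *model))      # -> spam
print(classify_nb(["project", "monday"], *model))  # -> ham
```

Any bag-of-words classifier of this shape first tokenizes the mail body; real filters add stop-word removal and feature selection on top of these counts.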
In order to classify emails automatically and efficiently as spam or legitimate, we took advantage of machine learning methods and some novel ideas from information retrieval.

Master Thesis: An approach to improve the time complexity of dynamic provable data possession (Çankaya Üniversitesi, 2016). Hawi, Mohammed Kadhim.
In this thesis, we aim to alleviate concerns about outsourced data storage and to guarantee the integrity of files in cloud computing. We suggest several ideas to improve the FlexDPDP scheme [13]. In particular, the proposed scheme reduces the time complexity of verification operations between the client and the server. The proposed scheme is a fully dynamic model. We introduce parameters to ensure the integrity of the metadata, at the cost of auxiliary storage on the client side (the client stores approximately 0.025% of the size of the raw file). The remarkable enhancement of the proposed scheme is the reduction in complexity: the communication and computation complexity decreases to O(1) on both the client side and the server side during dynamic updates (insertion, modification, and deletion) and challenge operations.

Master Thesis: An energy-efficient clustering based communication protocol with dividing the overall network area for wireless sensor networks (2014). Khalaf, Abdulrahman Zaidan.
This thesis addresses the energy efficiency and connectivity problem in wireless sensor networks (WSNs). There is a large difference between the energy levels of nodes near cluster heads and those far from them. This problem is compensated for by dividing the entire network (sensor field) into equal areas and applying a different clustering policy to each section. The results are compared with those of LEACH (Low Energy Adaptive Clustering Hierarchy), and the proposed system outperforms previous studies.
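For context, LEACH, the baseline the thesis compares against, elects cluster heads stochastically each round. The sketch below shows the standard LEACH threshold rule, not the thesis's modified protocol; the head fraction p and the toy 100-node network are illustrative values.

```python
import random

def leach_threshold(p, r, was_head_recently):
    """LEACH threshold T(n): probability that a node becomes cluster head
    in round r, given desired head fraction p. Nodes that already served
    as head in the last 1/p rounds are excluded (threshold 0)."""
    if was_head_recently:
        return 0.0
    return p / (1 - p * (r % int(1 / p)))

def elect_heads(node_ids, p, r, recent_heads, rng):
    """Each eligible node draws a uniform number; draws below the
    threshold make that node a cluster head for this round."""
    return [n for n in node_ids
            if rng.random() < leach_threshold(p, r, n in recent_heads)]

rng = random.Random(42)
heads = elect_heads(range(100), p=0.05, r=0, recent_heads=set(), rng=rng)
print(len(heads))  # roughly p * 100 heads on average
```

Note how the threshold rises toward 1 as the round counter approaches 1/p, guaranteeing every surviving node a turn as head within each cycle; the thesis's per-section policies would replace this single network-wide rule.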
Also, this protocol guarantees data transmission in high-traffic networks while reducing energy consumption and packet failure.

Master Thesis: Multi-Class Classification of Ship Images with Attention Mechanisms and a Hybrid ViT-ResNet Architecture (2025). Ergün, Berkay; Arslan, Serdar.
In this thesis, a hybrid model based on the Vision Transformer (ViT) and ResNetRS50 is developed for multi-class classification of ship images. ViT extracts high-level semantic information, while ResNetRS50 extracts low- and mid-level spatial features; the two branches are combined through attention mechanisms and a Gated Fusion layer. Training uses MixUp and CutMix data augmentation, Focal Loss with a distillation loss, the OneCycleLR scheduler, automatic mixed precision (AMP), and an exponential moving average (EMA) of the model weights. Experiments on a dataset of eight ship classes show that the proposed architecture outperforms single-branch CNN or ViT models in both accuracy and F1 score. The results indicate that hybrid architectures and attention-based fusion strategies provide an effective solution to ship classification problems.

Master Thesis: Classification of diabetic retinopathy using pre-trained deep learning models (2019). Al-Kamachy, Inas Mudheher Raghib Kafı.
Diabetic Retinopathy (DR) is considered the leading cause of blindness. If it is not detected early, many people around the world will suffer from diabetes that may lead to DR in their eyes. Any delay in regular monitoring and screening by ophthalmologists may allow rapid and dangerous progression of the disease, finally leading to vision loss.
The growing imbalance between the number of doctors available to monitor this disease and the number of patients worldwide leads to poor regular monitoring, and to vision loss in many cases that could have been treated in the earlier stages of DR. To address this problem, computer-aided diagnosis (CAD) is needed. Pre-trained deep learning models are the state of the art in image recognition and detection, with good performance. In this research, we applied image pre-processing, built several convolutional neural network models from scratch, and fine-tuned five pre-trained deep learning models (originally trained on ImageNet) on medical images of diabetic retinopathy in order to classify it into five classes. We then selected the best-performing model to build a diabetic retinopathy web application using Flask as the web framework. We used the KAGGLE kernel website with Jupyter notebooks, as well as Flask, to build the application. The final AUC was 0.68 using InceptionResNetV2.

Conference Object (Citation - Scopus: 5): Deep Learning Methods With Pre-Trained Word Embeddings and Pre-Trained Transformers for Extreme Multi-Label Text Classification (Institute of Electrical and Electronics Engineers Inc., 2021). Erciyes, N.E.; Görür, A.K.
In recent years, there has been a considerable increase in textual documents online. This increase requires highly improved machine learning methods to classify text across many different domains. The effectiveness of these methods depends on the model's capacity to understand the complex nature of unstructured data and the relations among its features. Many machine learning methods have long been proposed to solve text classification problems, such as SVM, kNN, and Rocchio classification.
These shallow learning methods have achieved undoubted success in many domains. For big, unstructured data like text, deep learning methods, which can learn representations and features from the input data without separate feature extraction, have proven to be one of the major solutions. In this study, we explore the accuracy of recently recommended deep learning methods for multi-label text classification, from simple RNN and CNN models to pre-trained transformer models. We evaluated these methods by computing multi-label evaluation metrics and compared the results with previous studies. © 2021 IEEE

Master Thesis: Content-Based Image Retrieval Using Deep Learning and Multi-Dimensional Indexing (2024). Uzel, Ömer; Arslan, Serdar.
Recent technological advances and the falling cost of hardware and software have made visual search applications both popular and indispensable. Fast and precise retrieval of images from large databases via visual queries has therefore become a critical task. We present a new system that significantly improves search performance compared with systems that run database searches at the video-frame level. Leveraging a pre-trained Convolutional Neural Network (CNN) model, we use unsupervised image retrieval processes to extract and store low-level features for efficient indexing. To enable fast and effective access, we apply an indexing structure known as the Vantage Point Tree (VP Tree), built over these low-level features. To exploit these features, we use dimensionality reduction techniques that represent them in a lower-dimensional space. Our experiments on a benchmark image dataset show that this approach yields faster and more accurate retrieval than K-Nearest Neighbor (KNN) search.
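A VP Tree of the kind the abstract names partitions points by their distance to a chosen vantage point. The compact sketch below is illustrative only: it uses 2-D points, Euclidean distance, and an arbitrary vantage-point choice, rather than the thesis's CNN feature vectors.

```python
import math

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance (Python 3.8+)

def build_vptree(points):
    """Recursively split points by the median distance to a vantage point."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return {"vp": vp, "mu": 0.0, "inner": None, "outer": None}
    dists = sorted(dist(vp, p) for p in rest)
    mu = dists[len(dists) // 2]  # median radius
    inner = [p for p in rest if dist(vp, p) < mu]
    outer = [p for p in rest if dist(vp, p) >= mu]
    return {"vp": vp, "mu": mu,
            "inner": build_vptree(inner), "outer": build_vptree(outer)}

def nearest(node, q, best=None):
    """Branch-and-bound nearest-neighbour search: descend the side of the
    median shell containing q first, and visit the other side only when
    the current best ball could still cross the shell."""
    if node is None:
        return best
    d = dist(node["vp"], q)
    if best is None or d < best[0]:
        best = (d, node["vp"])
    near, far = ((node["inner"], node["outer"]) if d < node["mu"]
                 else (node["outer"], node["inner"]))
    best = nearest(near, q, best)
    if abs(d - node["mu"]) < best[0]:  # best ball crosses the shell
        best = nearest(far, q, best)
    return best

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (5.0, 5.0), (6.0, 4.0)]
tree = build_vptree(pts)
print(nearest(tree, (1.9, 0.6)))  # closest stored point to the query
```

Because pruning relies only on the triangle inequality, the same structure works for any metric on the reduced feature space, which is why it pairs naturally with the dimensionality reduction step described above.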
Furthermore, we evaluate the proposed technique against KNN on two real video datasets, and it consistently outperforms KNN.

Master Thesis: Design and analysis of native XML databases in three-tier architectures (2004). Ergen, Mehmet Tunç.
XML is rapidly emerging as a standard for exchanging business data on the World Wide Web. From management systems to e-business application providers to pure development tools, XML has gone from an underground technology to an integrated component standard. It is the file format of choice for Web development, document interchange, and data interchange, and presents a new world of opportunities and challenges to programmers. It was predicted that by the end of 2004, more than 75% of e-business applications would include XML, regardless of the language the application was written in. As more and more applications start using XML, there will be a need to handle XML data efficiently at the back end. The need to store and process XML documents efficiently has created new XML-enabled technologies and tools. One of these tools is the native XML database, based on a document-in, document-out architecture with capabilities for storing, retrieving, querying, and updating documents. While native XML databases are an important new technology, they should not be used without careful analysis and consideration. In this thesis, native XML databases are investigated and analyzed in a three-tier architecture to secure the advantages that three-tier systems offer to application developers and the information technology industry.

Master Thesis: Determining rheumatoid arthritis and osteoarthritis diseases with plain hand x-rays using convolutional neural network (2019). Üreten, Kemal.
Recent advances in computer technology have facilitated the acquisition and processing of high-resolution images. The convolutional neural network (CNN) is a branch of deep learning.
CNNs were first introduced in 1995 by LeCun, and in 2012 AlexNet won the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), after which deep learning applications grew rapidly. There are many successful CNN studies, especially in dermatology, pathology, radiology, and ophthalmology. CNNs are highly successful in feature extraction and classification and require little pre-processing. However, overfitting is an important problem in the CNN method, and training requires a large dataset. If there is not enough data to train a CNN from scratch, a network previously trained on a natural image dataset is used for transfer learning. Transfer learning is the use of a pre-trained model for a new problem. In recent years, several studies have shown that CNN models trained on natural images achieve successful results in the medical field. Rheumatoid arthritis (RA) and hand osteoarthritis (OA) are two different diseases that cause pain, swelling, tenderness, and loss of function in the hand joints. The affected joints and radiologic lesions differ between the two diseases, as does their treatment. Conventional plain hand X-rays (CR) are often used in the diagnosis and differential diagnosis of RA and OA. The aim of this study is to develop software that will help physicians with the differential diagnosis of RA and OA from CR. To the best of our knowledge, this is the first study to distinguish between normal hands, hand OA, and RA using plain hand radiographs. The efficiency of the created models was evaluated using performance metrics such as accuracy, sensitivity, specificity, and precision. Pre-trained GoogLeNet, ResNet50, and VGG16 networks were used with transfer learning, and successful results were obtained from all three. Data augmentation, dropout, fine-tuning, and learning rate decay were applied to prevent overfitting.
During training, no signs of overfitting were observed in the training chart.

Master Thesis: Evaluation of terrain rendering algorithms (2005). İnam, Emin.
Terrain rendering plays an important role in outdoor virtual reality applications, games, Geographic Information Systems (GIS), military mission planning, flight simulations, and more. Many of these applications require real-time dynamic interaction from end users and must therefore rapidly process terrain data to adapt to user input. Typical height fields consist of a large number of polygons, so even most high-performance graphics computers have great difficulty displaying moderately sized height fields at interactive frame rates. The common solution is to reduce the complexity of the scene while maintaining high image quality. This thesis evaluates three real-time continuous terrain level-of-detail algorithms, described in the papers "ROAMing Terrain: Real-time Optimally Adapting Meshes" by Duchaineau, "Real-Time Generation of Continuous Levels of Detail for Height Fields" by Röttger, and "Fast Terrain Rendering Using Geometrical MipMapping" by Willem H. de Boer. The evaluation and comparison of the algorithms is based on the trade-off between polygon count and terrain accuracy over separate test datasets. The main aim of this thesis is research on terrain rendering algorithms that generate high-quality images in real time from height data.

Master Thesis: Finding the ethnical identity of human face (2012). Yenice, Merve.
This thesis analyzes how to determine a person's ethnic identity from his or her face. Parts of the face, such as the eyes, nose, mouth, and skin colour, are used to define the face. In addition, tools such as C# and the Luxand library are used to correctly detect and measure the facial parts.
This measurement is important and necessary for partitioning the human face and finding the dimensions of its features, because it gives the main idea about the shape, length, and colour of the face. The most important factors in determining the ethnic identity of a human face are its shape, length, and skin colour. Once these items are found, the ethnic identity of a person can easily be determined. The evidence gathered in this thesis shows that it has reached its aim.

Master Thesis: Integrating computer vision with a robot arm system (2014). Yosif, Zead Mohammed.
In recent decades, robotic systems have been employed in many fields, including industrial, civil, military, and medical applications. A vision system is integrated with the robot system to enhance its control performance. A great number of features can be computed from the information obtained from vision sensors (cameras). The information extracted from the vision system can be used as feedback to control the motion of the robot arm, but extracting this information is time consuming. This thesis addresses the problem of following (tracking) and grasping a moving target (object) of limited velocity in real time using the eye-in-hand configuration, in which a camera is mounted on the robot arm's end effector. This is done using a predictor (a Kalman filter) that estimates the future positions of the target; an algorithm was designed to track an object moving along different trajectories within the camera's field of view. The Kalman filter uses the measured position of the target, as well as previous state estimates, to fix the location of the target object at the next time step; in other words, the Kalman filter keeps observing the object until it is grasped.
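The predict-then-update cycle described above can be illustrated with a one-dimensional constant-velocity Kalman filter. This is a simplified stand-in for the thesis's tracker, not its actual implementation; the noise values q and r are arbitrary.

```python
def kf_step(x, P, z, dt=1.0, q=1e-3, r=0.25):
    """One predict+update cycle of a constant-velocity Kalman filter.
    State x = [position, velocity]; P is its 2x2 covariance (list of lists);
    z is the measured position; q and r are process/measurement noise."""
    # Predict: x <- F x with F = [[1, dt], [0, 1]];  P <- F P F^T + Q
    px, vx = x[0] + dt * x[1], x[1]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with position measurement z (observation H = [1, 0]):
    s = p00 + r                # innovation covariance
    k0, k1 = p00 / s, p10 / s  # Kalman gain
    y = z - px                 # innovation (measurement residual)
    x_new = [px + k0 * y, vx + k1 * y]
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new

# Track an object moving at +1 unit/step from noiseless measurements.
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for t in range(1, 21):
    x, P = kf_step(x, P, z=float(t))
print(round(x[0], 2), round(x[1], 2))  # position near 20, velocity near 1
```

The predicted position px is exactly what an eye-in-hand controller would aim the gripper at one step ahead; a 2-D tracker repeats the same arithmetic per image axis.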
Employing vision system information in the feedback control of robot systems has been a major research topic in robotics and mechatronics. Using this information has been proposed to handle stability and reliability issues in vision-based control systems.

Conference Object (Citation - Scopus: 5): Investigating End User Errors in Oil and Gas Critical Control Systems (Association for Computing Machinery, 2020). Alrawi, L.N.; Pusatli, T.
System availability and efficiency are critical in the petroleum sector, as any fault affecting these systems may negatively impact operational resources such as money, human resources, and time. It has therefore become important to investigate the reasons for such errors. This study targets human error, since the number of these errors is projected to increase in the sector. The factors that affect end user behavior are investigated, together with an evaluation of the relation between system availability and human behavior. An investigation following the descriptive methodology was performed to gain insight into human error factors. Questionnaires on software/hardware errors and errors due to the end user were collected from 81 site workers. The findings indicate a potential relation between end user behavior and system availability. Training, experience, education, work shifts, the system interface, and the use of memory sticks and I/O devices were identified as factors affecting end user behavior, and hence system availability and efficiency. © 2020 ACM

Master Thesis: Machine learning in artificial intelligence (2006). Ercan, Tardu.
In today's world, learning is a process for computers as well as for human beings. "Learnable" systems and computers will become more important in the coming years and will affect our lives in many ways. In this thesis, a survey has been carried out in the fields of artificial intelligence and machine learning, with a focus on decision tree learning algorithms.
Several decision tree learning algorithms were used to learn rules extracted from a dataset consisting of one year of water consumption data for Ankara together with meteorological data for Ankara. The results indicate which learning method is more efficient and performs better.

Master Thesis: Developing a Model for Predicting Varicose Vein Recurrence After Cyanoacrylate Adhesive Surgery Using Machine Learning (2025). Ahmed, Ruaa Saad Ahmed; Tokdemir, Gül.
Varicose vein disease is a common vascular disorder, frequently treated with minimally invasive methods such as cyanoacrylate adhesive treatment. However, recurrence remains a significant problem, which makes developing predictive models essential for improving post-treatment prognosis. This study aims to build a machine-learning-based model to predict the recurrence of varicose vein disease following cyanoacrylate adhesive treatment. A dataset covering a ten-year period, containing ultrasound reports, blood test results, and chronic disease indicators for 430 patients, was obtained from a medical center. During preprocessing, missing data were imputed, and the imbalanced classes were balanced using the SMOTE and SMOTEENN methods. RFE was applied for feature selection, and decision-tree-based importance rankings were computed. Classifiers including logistic regression, decision trees, support vector machines, Random Forest, XGBoost, and CatBoost were trained and tested. The data were split into 80% training and 20% test sets, and 5-fold cross-validation was used. Model performance was measured with evaluation metrics such as accuracy, precision, recall, F1-score, and ROC-AUC.
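The evaluation metrics listed above reduce to simple confusion-matrix counts. A self-contained sketch of the binary case follows; the labels are toy values, not the thesis's patient data.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary labels (1 = recurrence)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Toy labels: 6 of 8 predictions correct (tp=3, fp=1, fn=1, tn=3).
m = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1, 1, 0])
print(m)  # every metric comes out 0.75 on this toy split
```

On imbalanced data such as recurrence prediction, accuracy alone is misleading, which is why the study also balances classes with SMOTE and reports precision, recall, F1, and ROC-AUC.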
The results show that CatBoost and XGBoost performed considerably better than the other classifiers. Venous measurements, chronic disease indicators, and certain blood test parameters are among the most important predictive variables and could improve the clinical decision process. The developed model will help identify high-risk patients and enable early intervention strategies. However, one of the most important limitations of this study is that it relies on patient data from a single institution. Future studies should validate the model on larger and more diverse datasets and investigate integrating deep learning techniques and multimodal data sources to further improve predictive accuracy. This research highlights the potential of machine learning in vascular disease management and paves the way for data-driven advances in clinical practice.

Master Thesis: Measuring political polarization using big data: The case of Turkish elections (2020). Sürücü, Selim.
Big data has been the driving force behind recent machine learning and deep learning successes in many learning tasks. Social media data, as a big data source, has been used in many social studies to understand social movements and political and social change. In this study, we analyze social media (Twitter) data to measure political polarization, one of the recent concerns in politics. The study uses Twitter data collected during the 2019 elections in Turkey and develops new metrics for measuring political polarization. We analyzed political groups in the social network and then measured political polarization over time during the election period.
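One simple way to quantify the inter-group interaction the abstract describes is the fraction of interaction edges that cross community boundaries. This is a generic illustration, not necessarily the thesis's exact metric; the graph below is a hypothetical toy example.

```python
def cross_community_ratio(edges, community):
    """Fraction of interaction edges whose endpoints lie in different
    communities; lower values indicate stronger polarization (echo chambers)."""
    if not edges:
        return 0.0
    cross = sum(1 for u, v in edges if community[u] != community[v])
    return cross / len(edges)

# Toy interaction graph: two communities with a single bridging interaction.
community = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
edges = [("a", "b"), ("b", "c"), ("d", "e"), ("e", "f"), ("c", "d")]
print(cross_community_ratio(edges, community))  # 1 of 5 edges crosses -> 0.2
```

Computing this ratio per time window over the election period would show whether the communities interact less as the election approaches, which is the kind of trend the study reports.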
By applying community detection algorithms, we first identify communities from the interactions between users. We then measure the interaction between user groups (communities) to demonstrate the existence and growth of political polarization using big data during a general election. To the best of our knowledge, this is the first large-scale data study of political polarization during a political election.

Master Thesis: Model based human face detection using skin color segmentation (2002). Özbay, Eylem. MS, Department of Computer Engineering. Supervisor: Dr. Reza Hassanpour. January 2005, 85 pages.
The easiest way to identify people is by their faces, but this requires determining the location of the faces in images. Face identification systems are therefore generally preceded by face segmentation systems. The main goal of this thesis is to locate human faces and segment the regions belonging to them using skin color segmentation methods and facial features such as the nose, eyes, and mouth. The segmentation results may be used as input to other related systems.

Master Thesis: Multi-label and single-label text classification using standard machine learning algorithms and pre-trained BERT transformer (2023). Alfigi, Huda.
Natural language processing (NLP) research has recently attracted great interest due to the increasing availability of digital documents and the need to access them in various ways. The explosion of digital text data creates the need to develop a variety of text processing and classification techniques. The most fundamental and vital challenge in NLP is text classification. It was proposed for sorting documents and texts into predetermined categories according to their content, and has since become one of the most popular applications of machine learning.
In the machine learning (ML) approach, a general inductive process learns to build a classifier from a set of pre-classified texts and the properties of the classes of interest. Moreover, discovering relevant information can help increase retrieval efficiency while reducing information overload. Traditional models usually require manual effort to obtain good sample attributes before classification with standard machine learning algorithms, so feature extraction significantly constrains their effectiveness. Deep learning, on the other hand, differs from these typical models because it incorporates feature extraction into model building, performing a series of nonlinear transformations that map feature representations to outputs. Deep learning algorithms also eliminate the need for experts to define rules and features, instead automatically providing high-level semantic representations of texts. In this work, therefore, we explore the contextual embedding capabilities of pre-trained models such as BERT, and exploit multi-label classification of text documents in a large English news dataset, in addition to applying some traditional machine learning methods to a small English news dataset. Finally, Arabic BERT, another version of BERT, is used to investigate the sentiment orientation of aspects extracted from an Arabic hotel review dataset.

Master Thesis: Multifunction robot controlled by computer vision system (2014). Mustafa, Mohammed Sulaiman.
In this thesis, we build a robot platform with multifunction capabilities, in which functions can easily be added, modified, and removed without redesign, using accessible technology to create a suitable, efficient platform.
The platform is documented with figures, tables, and programming code so that this thesis can be applied and implemented in the real world, showing the obstacles and challenges overcome on the way to the final goal. The thesis requires only a basic level of electronics and computer programming, because a simplified way of building the robot is used. The multifunction platform is a unique idea and opens new space for experimenters to benefit from these functions in their raw state, without needing to study the robot's hardware and software in detail. The final robot form is shown in the appendix at the end of this thesis.

