Browsing by author "Ozturk, Emir"
Now showing 1 - 8 of 8
Item: A Character Based Steganography Using Masked Language Modeling (IEEE-Inst Electrical Electronics Engineers Inc, 2024). Ozturk, Emir; Mesut, Andac Sahin; Fidan, Ozlem Aydin.
In this study, a steganography method based on the BERT transformer model is proposed for hiding text data in a cover text. The aim is to hide information by replacing specific words within the text using BERT's masked language modeling (MLM) feature. Two models, fine-tuned for English and Turkish, are used to perform steganography on texts in these languages; furthermore, the proposed method can work with any transformer model that supports masked language modeling. While the amount of information traditionally hidden in text is often limited, the proposed method allows a significant amount of data to be hidden in the text without distorting its meaning. The method is tested by hiding secret texts of varying lengths in cover texts of different lengths in two different language scenarios, and the results are analyzed in terms of perplexity, KL divergence, and semantic similarity. The proposed method achieves the best results among the methods found in the literature, with a KL divergence of 7.93 and a semantic similarity of 0.99, showing that it has low detectability and performs the data hiding process successfully.

Item: Efficient methods to generate cryptographically significant binary diffusion layers (Inst Engineering Technology-Iet, 2017). Akleylek, Sedat; Rijmen, Vincent; Sakalli, Muharrem Tolga; Ozturk, Emir.
In this study, the authors propose new methods using a divide-and-conquer strategy to generate n x n binary matrices (for composite n) with a high/maximum branch number and the same Hamming weight in each row and column.
They introduce new types of binary matrices, namely the (BHwC)(t,m) and (BCwC)(q,m) types, which are a combination of Hadamard and circulant matrices and the recursive use of circulant matrices, respectively. With the help of these hybrid structures, the search space for generating a binary matrix with a high/maximum branch number is drastically reduced. Using the proposed methods, they focus on generating 12 x 12, 16 x 16, and 32 x 32 binary matrices with a maximum or maximum-achievable branch number and the lowest implementation costs (to the best of their knowledge) for use in block ciphers. They then discuss the implementation properties of the generated binary matrices and present experimental results for these sizes. Finally, they apply the proposed methods to larger sizes, i.e., 48 x 48, 64 x 64, and 80 x 80 binary matrices, which have applications in secure multi-party computation and fully homomorphic encryption.

Item: File Size Estimation in JPEG XR Standard Using Machine Learning (IEEE, 2016). Ozturk, Emir; Mesut, Altan.
Although JPEG XR was developed later than JPEG2000, it lacks JPEG2000's ability to compress an image to a given target size. In this study, a machine learning algorithm is proposed to bring this feature to JPEG XR. The results show that the file size estimation algorithm gives accurate results when the compression ratio is restricted to a specific range.

Item: Generating binary diffusion layers with maximum/high branch numbers and low search complexity (Wiley-Hindawi, 2016). Akleylek, Sedat; Sakalli, Muharrem Tolga; Ozturk, Emir; Mesut, Andac Sahin; Tuncay, Gokhan.
In this paper, we propose a new method to generate n x n binary matrices (for n = k · 2^t, where k and t are positive integers) with maximum/high branch numbers and a minimum number of fixed points by using 2^t x 2^t Hadamard (almost) maximum distance separable matrices and k x k cyclic binary matrix groups.
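The branch number that both diffusion-layer abstracts optimize can be checked by exhaustive search for small sizes. Below is a minimal sketch, assuming the standard definition (the minimum of wt(x) + wt(Mx) over all nonzero binary inputs x, with arithmetic over GF(2)); this brute force is only an illustration of the metric, not the authors' reduced-complexity search method.

```python
from itertools import product

def mat_vec_gf2(M, x):
    """Multiply binary matrix M by binary vector x over GF(2)."""
    return [sum(m * b for m, b in zip(row, x)) % 2 for row in M]

def branch_number(M):
    """Minimum of wt(x) + wt(M*x) over all nonzero inputs x:
    the diffusion measure used for these binary matrices."""
    n = len(M)
    best = 2 * n + 1
    for x in product([0, 1], repeat=n):
        if any(x):
            best = min(best, sum(x) + sum(mat_vec_gf2(M, x)))
    return best

# 4 x 4 circulant of (0 1 1 1): every row and column has Hamming weight 3.
C = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
print(branch_number(C))  # 4
```

The search space is exponential in n (all 2^n - 1 nonzero inputs), which is exactly why the structured constructions in these papers matter: they shrink the set of candidate matrices that has to be examined.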
Using the proposed method, we generate n x n (for n = 6, 8, 12, 16, and 32) binary matrices with maximum branch numbers, which are efficient in software implementations. The proposed method is also applicable with m x m circulant matrices to generate n x n (for n = k · m) binary matrices with maximum/high branch numbers. For this case, examples of 16 x 16, 48 x 48, and 64 x 64 binary matrices with branch numbers of 8, 15, and 18, respectively, are presented. Copyright (C) 2016 John Wiley & Sons, Ltd.

Item: A method to improve full-text search performance of MongoDB [MongoDB'nin tam metin arama performansını iyileştirme yöntemi] (Pamukkale Univ, 2022). Mesut, Altan; Ozturk, Emir.
B-Tree based text indexes used in MongoDB are slow compared to structures such as inverted indexes. In this study, it is shown that full-text search speed can be increased significantly by indexing a structure in which each distinct word in the text appears only once. The Multi-Stream Word-Based Compression Algorithm (MWCA), developed in our previous work, stores word dictionaries and data in different streams. While documents were being added to a MongoDB collection, they were encoded with MWCA and separated into six different streams. Each stream was stored in a different field, and the three containing unique words were used when creating a text index. In this way, the index could be created in a shorter time and took up less space. It was also observed that the Snappy and Zlib block compression methods used by MongoDB reach higher compression ratios on MWCA-encoded data. Search tests on text indexes created on collections using different compression options show that our method provides a 19- to 146-fold speed increase and 34% to 40% less memory usage.
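The core indexing idea in the MongoDB abstract (index a field in which each distinct word occurs only once, so the text index never revisits a repeated word) can be sketched as follows. This is an illustrative reduction, not MWCA's actual six-stream layout; the function name is invented.

```python
def unique_word_stream(text):
    """Keep one copy of each distinct word, in order of first
    appearance. Indexing this field instead of the full text means
    repeated words are indexed only once, shrinking the index."""
    seen, out = set(), []
    for word in text.lower().split():
        if word not in seen:
            seen.add(word)
            out.append(word)
    return " ".join(out)

doc = "to be or not to be that is the question"
print(unique_word_stream(doc))  # "to be or not that is the question"
```

On real documents the savings grow with redundancy: the more often common words repeat, the smaller the indexed field becomes relative to the original text.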
Tests on regex searches that do not use the text index also show that the MWCA model provides a 7- to 13-fold speed increase and 29% to 34% less memory usage.

Item: Multi-Stream Word-Based Compression Algorithm (IEEE, 2017). Ozturk, Emir; Mesut, Altan; Diri, Banu.
In this article, we present a novel word-based lossless compression algorithm for text files which uses a semi-static model. We named our algorithm the Multi-Stream Word-Based Compression Algorithm (MWCA) because it stores the compressed forms of the words in three individual streams depending on their frequencies in the text. It also stores two dictionaries and a bit vector as side information. In our experiments, MWCA obtains a compression ratio of 3.23 bpc on average and 2.88 bpc on files larger than 50 MB. If a variable-length encoder such as Huffman coding is used after MWCA, the given ratios are reduced to 2.63 and 2.44 bpc, respectively. With the advantage of its multi-stream structure, MWCA could become a good solution, especially for storing and searching big text data.

Item: Multi-stream word-based compression algorithm for compressed text search (Springer Heidelberg, 2018). Ozturk, Emir; Mesut, Altan; Diri, Banu.
In this article, we present a novel word-based lossless compression algorithm for text files using a semi-static model. We named this method the Multi-Stream Word-Based Compression Algorithm (MWCA) because it stores the compressed forms of the words in three individual streams depending on their frequencies in the text and stores two dictionaries and a bit vector as side information. In our experiments, MWCA produces a compression ratio of 3.23 bpc on average and 2.88 bpc for files greater than 50 MB; if a variable-length encoder such as Huffman coding is used after MWCA, the given ratios are reduced to 2.65 and 2.44 bpc, respectively. MWCA supports exact word matching without decompression, and its multi-stream approach reduces the search time with respect to single-stream algorithms.
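The frequency-based stream assignment that both MWCA abstracts describe (words routed to separate streams by how often they occur) can be sketched as below. The three stream names and the thresholds are illustrative assumptions, not MWCA's actual rules or encoding.

```python
from collections import Counter

def split_streams(words, hi=2):
    """Assign each distinct word to one of three streams by its
    frequency in the text: frequent words, infrequent words, and
    words that occur only once. Thresholds here are illustrative."""
    freq = Counter(words)
    streams = {"frequent": [], "infrequent": [], "once": []}
    for word, count in freq.items():
        if count > hi:
            streams["frequent"].append(word)
        elif count > 1:
            streams["infrequent"].append(word)
        else:
            streams["once"].append(word)
    return streams

words = "a a a b b c d a".split()
print(split_streams(words))
# {'frequent': ['a'], 'infrequent': ['b'], 'once': ['c', 'd']}
```

Separating streams this way lets frequent words get the shortest codes, and, as the Springer abstract notes, lets a search touch only the streams it needs.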
Additionally, the MWCA multi-stream structure reduces network load by requesting only the necessary streams from the database. With the advantage of its fast compressed-search feature and multi-stream structure, we believe that MWCA is a good solution, especially for storing and searching big text data.

Item: Performance Comparison of JPEG, JPEG2000 & JPEG XR Image Compression Standards (IEEE, 2016). Ozturk, Emir; Mesut, Altan; Carus, Aydin.
In this study, the performances of JPEG (the most widely used lossy image compression standard since it was published in 1992), JPEG2000 (designed to provide superior image quality at low bit rates), and JPEG XR (aimed at reaching the speed of JPEG and the quality of JPEG2000) are evaluated with an application developed in C# which is able to use different codecs. The results show that the recently developed JPEG standard (JPEG XR) is able to compress images with the same quality as JPEG2000, but not at the same speed as JPEG.
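The masked-language-model hiding scheme in the first listed abstract replaces words with model-proposed candidates chosen by the secret bits. A toy sketch of that selection step follows: the candidate lists and words here are invented stand-ins for real fill-mask outputs, each list is assumed to hold a power-of-two number of candidates, and this is not the paper's exact encoding.

```python
def embed(bits, slots):
    """At each maskable slot, pick one of the 2^k candidate words
    using the next k secret bits."""
    out, pos = [], 0
    for candidates in slots:
        k = len(candidates).bit_length() - 1   # k bits per slot (len = 2^k)
        idx = int(bits[pos:pos + k] or "0", 2)
        out.append(candidates[idx])
        pos += k
    return out

def extract(words, slots):
    """Recover the bits from each chosen word's candidate index,
    assuming the receiver reproduces the same candidate lists."""
    bits = ""
    for word, candidates in zip(words, slots):
        k = len(candidates).bit_length() - 1
        bits += format(candidates.index(word), "0" + str(k) + "b")
    return bits

# Hard-coded stand-ins for a masked LM's ranked candidates per slot.
slots = [["big", "large", "huge", "vast"],      # 2 bits
         ["quick", "fast"],                     # 1 bit
         ["dog", "hound", "pup", "mutt"]]       # 2 bits

secret = "10011"
stego = embed(secret, slots)
print(stego)                  # ['huge', 'quick', 'mutt']
print(extract(stego, slots))  # '10011'
```

Because any fill-mask model can supply the candidate lists, this selection mechanism is model-agnostic, which matches the abstract's claim that the method works with any transformer supporting masked language modeling.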