References for a paper on big data and information security

Asked by shuanger2010 · 2 answers · 121 views

なかさら

Accepted answer
The Design and Implementation of an Equipment Management System for an Institute of Information Security

Abstract: Against the background of a research institute's equipment management, and taking the institute's equipment management model as the research object, an equipment management system was developed. The system is the product of combining equipment administration with computer technology. Starting from the functional requirements analysis and the defined data model, the main functions of the application and the key techniques used to implement the system are discussed. The system consists of two parts: the development of the back-end database and the development of the front-end application. Problems in current equipment management are taken into account, and the main implementation techniques, such as database technology and C#, are examined, together with the actual development environment and the system's workflow. The system implements equipment type management, equipment queries, equipment file management, user type management, user file management, equipment subscription, equipment borrowing, equipment fines, and equipment return, along with the corresponding data addition, modification, and deletion functions.

Keywords: management system; equipment circulation; equipment management; user management
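To illustrate the borrow/fine/return flow the abstract lists, here is a minimal sketch. The original system was built in C# on a relational database, so the Java class below, its in-memory map, and the flat daily fine rate are purely hypothetical:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the borrow/fine/return flow described in the abstract.
// In the real system these records would live in database tables, not a map.
public class EquipmentLedger {
    // Assumed flat fine charged per day once a loan is overdue.
    private static final double DAILY_FINE = 1.0;

    // Equipment id -> date by which it must be returned.
    private final Map<String, LocalDate> dueDates = new HashMap<>();

    // Record a loan of one piece of equipment.
    public void borrow(String equipmentId, LocalDate dueDate) {
        dueDates.put(equipmentId, dueDate);
    }

    // Close the loan and compute the fine, if any, for a late return.
    public double giveBack(String equipmentId, LocalDate returnedOn) {
        LocalDate due = dueDates.remove(equipmentId);
        if (due == null || !returnedOn.isAfter(due)) {
            return 0.0; // unknown loan or returned on time: no fine
        }
        long daysLate = ChronoUnit.DAYS.between(due, returnedOn);
        return daysLate * DAILY_FINE;
    }
}
```

In the actual system the C# front end would issue the corresponding queries against the back-end database rather than mutating an in-memory structure.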


188 · Comments (12)

tkj4782

Big data refers to a volume of data so huge that it cannot be stored and processed within an acceptable time frame by traditional file systems. The next question that comes to mind is how big data needs to be in order to be classified as big data, and there is a lot of misconception around the term. We usually call data big if its size is in gigabytes, terabytes, petabytes, exabytes, or anything larger, but size alone does not define big data. Even a small file can count as big data depending on the context in which it is used. To make this clear: if we attach a 100 MB file to an email, we cannot send it, because email does not support an attachment of that size; with respect to email, that 100 MB file is big data. Similarly, if we need to process 1 TB of data within a given time frame, a traditional system cannot do it, since its resources are not sufficient to accomplish the task.

As you are aware, social sites such as Facebook, Twitter, Google+, LinkedIn, and YouTube hold data in huge volumes, and as the user base of these sites grows, storing and processing that enormous data becomes a challenging task. Storing this data matters for firms that generate revenue from it, and it is not possible with a traditional file system. This is where Hadoop comes into the picture.

Big data simply means huge amounts of structured, semi-structured, and unstructured data that can be processed for information. Nowadays massive amounts of data are produced because of growth in technology, digitalization, and a variety of sources, including business application transactions, videos, pictures, electronic mail, social media, and so on; the big data concept exists to process such data.

- Structured data: data with a proper format associated with it, for example data stored in database files or spreadsheets.
- Semi-structured data: data with only a partial format associated with it, for example data stored in mail files or doc files.
- Unstructured data: data with no format associated with it, for example image, audio, and video files.

Big data is characterized by the 3 Vs [1]:
- Volume: the amount of data generated, which is huge.
- Velocity: the speed at which the data is generated.
- Variety: the different kinds of data that are generated.

A. Challenges Faced by Big Data
There are two main challenges [2]: how to store and manage a huge volume of data, and how to process and extract valuable information from that volume within a given time frame. These challenges led to the development of Hadoop.

Hadoop is an open source framework created by Doug Cutting in 2006 and managed by the Apache Software Foundation; it was named after his son's yellow toy elephant. Hadoop was designed to store and process big data, and the framework comprises two main components:
- HDFS: the Hadoop Distributed File System, which takes care of storing data within the cluster.
- MapReduce: which takes care of processing the data that is present in HDFS.
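To make the HDFS side concrete, here is a minimal sketch of a client writing and reading a file through the HDFS Java API. The name node address, the file path, and the explicit replication setting are assumptions for illustration, not details from the text above:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical name node address; adjust for your cluster.
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        // Number of copies kept of each block (the default is already 3).
        conf.set("dfs.replication", "3");

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/demo/sample.txt");

        // Write: the name node records the metadata, data nodes store the blocks.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
        }

        // Read the file back from the data nodes.
        try (FSDataInputStream in = fs.open(path)) {
            byte[] buf = new byte[32];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```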
Now let's have a look at a Hadoop cluster. It contains two kinds of nodes, master nodes and slave nodes ("node" is the technical term for a machine in the cluster, and "daemon" is the technical term for a background process running on a Linux machine). The master node runs the name node and job tracker daemons, while each slave node runs the data node and task tracker daemons. The name node and data nodes store and manage the data and are commonly referred to as storage nodes, whereas the job tracker and task trackers handle the processing and computation of the data and are commonly known as compute nodes. Normally the name node and job tracker run on a single machine, while the data nodes and task trackers run on the other machines.

B. Features of Hadoop [3]
- Cost-effective: it requires no special hardware and can be deployed on ordinary machines, technically known as commodity hardware.
- Large cluster of nodes: a Hadoop cluster can support a large number of nodes, which provides huge storage and processing capacity.
- Parallel processing: a Hadoop cluster allows data to be accessed and processed in parallel, which saves a lot of time.
- Distributed data: Hadoop takes care of splitting and distributing the data across all nodes within a cluster, and it also replicates the data across the cluster.
- Automatic failover management: once failover management is configured on a cluster, the admin need not worry about failed machines. A copy of each piece of data is replicated to another node in the rack, and Hadoop takes care of the internetworking between racks.
- Data locality optimization: the most powerful feature of Hadoop and the reason it is so efficient. If a program needs a huge data set that resides somewhere else in the cluster, the requesting machine sends its code to the node that holds the data and the computation runs there, which saves a great deal of bandwidth.
- Heterogeneous cluster: nodes can come from different vendors and can run different flavors of operating system.
- Scalability: adding or removing a machine does not affect the cluster, and neither does adding or removing components of a machine.

C. Hadoop Architecture
Hadoop comprises two components: HDFS and MapReduce. Hadoop splits big data into chunks and stores them on several nodes within the cluster, which significantly reduces the processing time. It also replicates each chunk onto other machines in the cluster; the number of copies kept depends on the replication factor. By default the replication factor is 3, in which case there are 3 copies of each chunk on 3 different machines.
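The MapReduce side is easiest to see in the classic word-count job: map tasks run next to the data blocks (the data locality optimization described above) and emit (word, 1) pairs, and reduce tasks sum the counts per word. Here is a minimal sketch against the standard Hadoop MapReduce API, with input and output paths taken from the command line:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: for every word in the input line, emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: sum the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on map nodes
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a jar, this would be launched with something like `hadoop jar wc.jar WordCount /input /output`; the job tracker then schedules the map tasks onto the nodes that hold the input blocks.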
Reference: Mahajan, P., Gaba, G., & Chauhan, N. S. (2016). Big Data Security. IITM Journal of Management and IT, 7(1), 89-.

Take this to a translation site and translate it yourself; ask if there is anything you don't understand.

245 · Comments (14)
