
XiNFiNiTY's Content - Page 13 - InviteHawk - Your Only Source for Free Tracker Invites



Everything posted by XiNFiNiTY

  1. Most SEO experts out there have come to recognize how humbling the field of search engine optimization can be. Just when you put together a series of rules or techniques that work well, Google up and turns the world upside down. The nice thing about optimizing a site, however, is that most of the really important core issues still matter, even after Google makes any change - whether it's big or small. How well your site does still depends on a wide variety of factors, most importantly your keyword usage, the quality of your site - including the number of broken links and how well-written the content is - and, toward the top of the list, backlinks from other high-quality sites. So you believe that you're doing all of this, but you're really not sure. Sure, some of your pages rank toward the top of Google listings, but how do you know how well your site is optimized as a whole? Obviously, one way to do it is to create a checklist and go through your site one page at a time. Of course, if you're like me and hate that kind of grunt work, then you'll be as pleased as I was to discover SEO Panel.

Installing SEO Panel

What's SEO Panel? It's a high-quality, free PHP-based app that you can use to audit your entire website, or even your competitors' websites, for the leading SEO issues that most experts agree really matter. This is somewhat similar to Traffic Travis, another SEO tool that I've reviewed. Before you can start using the app, you're going to need a web server where you can install it. In my case, I installed it on my local Xampp web server, but you can download and copy the files to the public area of any web server that has PHP and MySQL installed and enabled. Make sure that the config/sp-config.php file and the tmp directory are writable by everyone. Then, just open up a browser and go to the /seopanel/install/ directory.
In my case it would be http://localhost/seopanel/install/ When you run the installation, it will check your server to make sure everything it needs is installed and enabled. Odds are that you may receive the "CURL Support" error above, especially if you've set up your own web server. CURL is usually easy enough to enable: just go to the php.ini file in your web server's PHP installation and remove the ";" from the start of the curl extension line. Now when you launch the install script, you should see the following screen. Don't forget to change the permissions on /config/sp-config.php afterward so that not everybody has write permission. In the next step, you'll be in the admin panel. The default login is spadmin/spadmin. The first step, once you're in SEO Panel, is to set up the websites that you want to monitor. Just click to add a new website and fill in the details. Once you're done, you'll see all of your configured and activated websites in the Website Manager area. You can see just from the picture above how many features this application has, including reports, SEO analysis tools, keyword reports and more. When you click on the reports manager, you'll have the opportunity to request a full analysis of your site based on criteria like current search engine rank, search engine saturation, and the number of quality backlinks to your website. You'll get a really good overview of the websites you want to follow when you click on the "SEO Panel" link on the main menu bar. The account summary that shows up will provide current search engine ranks, backlinks, and indexed pages. Next, click on "Keyword Position Checker" and you'll discover a list of tools that make it a lot easier to determine whether your keyword efforts are currently good enough. One of my favorites is the keyword position checker. Just choose the search engine you want to use, type in a keyword or phrase that you'd like your site to rank highly for, and click Proceed.
Just scroll down the listings, and if you see any highlighted entries, then that's a page from the site that you're currently monitoring with the SEO tool. This is a fast and easy way to see whether you rank well at all for the topics you're trying to focus on with your site. Another invaluable tool in this PHP program is the "Auditing" area. This section lets you run audits against any website, and will reveal statistics about any site. Just choose the audit types you want to perform and let the system do the work for you. When you do a full audit, be prepared to wait a while, as SEO Panel actually crawls through the entire site to analyze it and report back with the relevant data. Each page gets its own line of statistics, and you'll be able to clearly see which pages on your site have the lowest scores and need the most work.
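The pre-install steps above can be sketched from the command line. The paths and the XAMPP-style layout here are assumptions you should adjust for your own server, and the php.ini edit is demonstrated on a local demo copy so the change is easy to verify:

```shell
# Demo stand-in for your real php.ini (on XAMPP it lives under the install's php/ directory)
printf ';extension=php_curl.dll\n' > php.ini.demo

# Remove the ";" so PHP loads the curl extension
sed -i 's/^;extension=/extension=/' php.ini.demo
cat php.ini.demo   # the line should now begin with "extension="

# Make SEO Panel's config file and tmp directory writable for the installer
# (tighten sp-config.php again once installation is complete):
#   chmod 666 seopanel/config/sp-config.php
#   chmod -R 777 seopanel/tmp
```

Restart the web server after editing the real php.ini so the change takes effect.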
  2. We’ve talked a lot in the past about running local web servers for development purposes or to try out software such as Wordpress without having to pay for hosting, but how do you take it live once you’re ready to launch? After a recent question on our tech support community, I was prompted to write about this process in detail, so here goes - a guide to taking your locally developed Wordpress site to a live server. The principles remain the same for any CMS or web software, though obviously your database structure will be different.

Requirements

I’m going to assume that you already have a local server running, and are able to access both the files and the database through PHPMyAdmin. You’ll also need to have a hosting account set up and a domain name associated with it already - we won’t be covering that today. Today I’ll be outlining the case of moving to a shared host without command line access, which I believe will be the most common use case. Bear in mind that if your database is huge, you can’t use this method, as you won’t be able to upload large files. Basically, your database export needs to be under 2MB, or things start to get very complicated. Beyond that, you’ll need to do partial exports, or use the command line. However, this 2MB doesn't include your actual files - it's just the textual content of the database. So unless you have a few thousand posts in your Wordpress, you should be just fine.

1. Prepare

Make sure you know your database user, database name, and password for both your offline and online server accounts beforehand. Also note down the URLs and file paths you’ll need to adjust later - for instance, your files may be linked using an absolute path such as c:/sites/wordpress/wp-content/uploads/2011/05/test.jpg or http://localhost/wp-content/uploads~ . Note down everything up to the wp-content section, as this is what you’ll need to adjust later.
The URL you use to access the site locally may also be different from the file paths of pictures you’ve uploaded, so make sure you note that down too - we’ll be adjusting them both later on.

2. Export The Database

Open your local PHPMyAdmin panel and navigate to the correct database if you have a few. Then:
- Select the Export tab.
- Click Select All to ensure all tables are selected.
- Check that Add DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT is selected (it isn’t by default).
- Check Save as file.
- Enter the filename template as something memorable (I chose "export").
- Click Go to save the file somewhere.

3. Adjust Paths

This is the difficult bit, and you may need to come back and do it again if you miss something. Make a copy of the SQL file first in case you mess it up. Open up the SQL file you just saved in a good text editor. By opening the entire file like this, we can search and replace paths/URLs all at once, without needing to adjust settings through the Wordpress admin panel or having to use complicated SQL commands. Do a simple search first - look for the previous offline domain you were running the site on - just to check you have the search term entered correctly. Then, taking note of trailing slashes, perform a full search/replace on every occurrence of that item in the file. So, for example, if you previously set everything up on localhost, replace all instances of localhost with yourdomain.com. If you were using Windows, you may find your image paths use the c:/~ notation, so replace that with your domain address too. A good rule of thumb is to check before you actually replace - just FIND the paths before you start adjusting them.

4. Upload Files

Open up an FTP connection to your live server and upload the contents of your offline Wordpress folder into the httpdocs or public_html folder there.
Assuming you will be installing to the root of your live server, you should be able to see the wp-content folder inside the public_html web server root now. Warning: if you’re coming from Windows, there may be some serious security issues with permissions. After you’re up and running, install the Wordpress Security Checker to do an automated scan of folder permissions. Note: on GoDaddy hosting, it may be easier to install Wordpress using the control panel instead of uploading all your offline files - GoDaddy often makes the database connection settings difficult. In this case, you only really need to upload the contents of the wp-content directory, then continue on to re-import the database.

5. Re-Import The Database

First, zip up the SQL file you made earlier, and make sure it’s less than 2MB. Then open up PHPMyAdmin on your live server. You should see a heading for Import. Click there, choose your modified and zipped-up SQL file, and upload it.

6. Edit wp-config.php

(Not necessary if you used the Fantastico / application control panel installer.) In the root of your directory is the Wordpress config file. Open it up and edit the appropriate lines for "database name", "database user", and "database password". That’s it! All done. Everything should be working at this point, but you may discover now that the paths you entered during the SQL editing stage were actually wrong - don’t panic though, it’s easy to re-edit the original backup and upload again until you have it right, and soon you’ll have mastered the process. Any problems? Of course, I’ll try my best to help in the comments, but I can only point you in the right direction rather than give specific answers. You might also want to ask in our fantastic and lively tech support community part of the site, which is where this article was set in motion in the first place.
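If you do have a command line available, the search-and-replace step can also be done with sed. This is a sketch only: export.sql, yourdomain.com, and the c:/sites/wordpress path are placeholders for your own dump file, live domain, and local paths, and the first line just creates a demo stand-in for the dump so the commands can be tried safely:

```shell
# Demo stand-in for the dump you exported from PHPMyAdmin
printf 'INSERT INTO wp_posts VALUES ("http://localhost/wp-content/uploads/2011/05/test.jpg");\n' > export.sql

# Always work on a copy so the original backup stays intact
cp export.sql export-live.sql

# Replace the local URL, then any Windows-style file paths, with the live domain
sed -i 's|http://localhost|http://yourdomain.com|g' export-live.sql
sed -i 's|c:/sites/wordpress|http://yourdomain.com|g' export-live.sql

# Sanity check: FIND before you trust the result - no old paths should remain
if grep -q 'localhost' export-live.sql; then echo 'old paths remain!'; else echo 'clean'; fi
```

Note that serialized PHP data (some plugin settings) stores string lengths, so a plain text replacement can break those values; the manual FIND-first advice above applies just as much here.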
  3. The web as we know it is evolving faster than ever before. As of late, HTML5 is coming onto the scene, providing the capability of developing highly interactive web apps without the need for proprietary Flash. Instead, all a user needs is a supported, modern web browser, and they'll be able to enjoy the best of what the web has to offer. However, creating such interactive content is never as easy as pie, and that rule doesn't exclude HTML5's main element that does all the work: canvas. If you've been following what the latest browsers have to offer, some of them now include a GPU-accelerated experience that makes the canvas element run a lot better. There are frameworks available that try to make the canvas element a little bit easier to develop for, notably jQuery. But even then, jQuery makes you type quite a bit. Web developers, here's something better for you.

About jCanvas

jCanvas is a little jQuery plugin written entirely in JavaScript that makes working with HTML5's canvas element, via jQuery, a lot easier. Web developers will get a lot of benefit out of using jCanvas. By using jCanvas, you get to work with much simpler code, which the plugin translates into the relevant native canvas calls for the browser to run.

Examples of Use

jCanvas can draw a large number of objects. For example, here we can see an ellipse that is filled in with a gradient. There are plenty of parameters that you can set while still keeping the amount of actual code as small as possible. In this example, the gradient parameters were set first (distances, colors, etc.), followed by the drawing of the ellipse itself. For programmers, this should be a very eye-appealing way to write code with no over-the-top syntax. In this example, a regular jpg image is being halfway inverted. The first function (or set of instructions) sets how the inversion takes place, while the second function draws the image and loads the inversion function onto it.
When the code runs, you get a halfway-inverted image. In our final example, different shapes are drawn by means of different functions provided by jCanvas. The green, unfilled rectangle was drawn by a simple function dedicated to rectangles. As always, you can customize your rectangle, even with parameters for the stroke width and corner radius (the amount by which the corners are rounded). The pentagon is drawn by a more generic function that applies to all regular polygons. You can also apply all the same parameters as with the other functions. The difference is important because you can draw a square with both the rectangle and polygon functions, but you can only draw non-square rectangles with the rectangle function. (Squares are rectangles, but rectangles aren't always squares!)

Other Information & Support

There's a lot more you can do with each function, plus there are many more functions that you can use! You can download jCanvas by going here. If you need any help, the full, well-written Documentation page should clarify most if not all questions. If that still doesn't work, you can contact the developer of jCanvas by checking out his information here. If you wish to try out jCanvas before playing around with it on your own site, the developer has set up a very cool-looking Sandbox page where you can enter code and watch the magic happen. Finally, if you would like to help contribute to the open source jCanvas project, you are more than welcome to do just that by going here. jCanvas is always being improved by the developer, and new releases are made available every few weeks.

Conclusion

jCanvas is a great web tool for making your programming experience much simpler, especially if you are a heavy user of the canvas element. Again, some of the highlight features are:
- Draw shapes, paths, images, and text.
- Style these using colors, gradients, patterns, and shadows.
- Manipulate the canvas (rotate, scale, etc.).
- A huge range of options to suit your needs.
Internet users will thank you as well for using jCanvas, because you'll have more time to fully develop your web app and make sure it has all the functionality you want while enjoying great performance. Are you a web developer who is involved in HTML5? Do you think jCanvas will help you with your development? What features would you like to see in jCanvas? (Please check the documentation first for what's already implemented!)
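As a taste of the API, here is a minimal page sketching the rectangle-specific and regular-polygon functions described above. It assumes you've downloaded jQuery and jCanvas alongside the page; the canvas size, coordinates, and colors are arbitrary:

```html
<canvas id="demo" width="320" height="200"></canvas>
<script src="jquery.min.js"></script>
<script src="jcanvas.min.js"></script>
<script>
// A green, unfilled rectangle with rounded corners, via the rectangle function
$('#demo').drawRect({
  strokeStyle: 'green',
  strokeWidth: 2,
  x: 90, y: 100,          // jCanvas positions shapes from their center by default
  width: 120, height: 70,
  cornerRadius: 8
});

// A pentagon via the generic regular-polygon function
$('#demo').drawPolygon({
  fillStyle: 'steelblue',
  x: 240, y: 100,
  radius: 45,
  sides: 5
});
</script>
```

Both calls take the same styling parameters (strokeStyle, fillStyle, and so on), which is what makes the shape functions feel consistent across the library.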
  4. You know, something like adding a print button to a web page sounds pretty simple, right? In fact, why do we even need to add any print button or link to the page at all, when all the reader has to do is click "File" and "Print..." in the browser menu? Ultimately, different people want a print feature on their webpage for different reasons. You may just want to add convenience. When the reader can click a button to get a printout, it saves a few clicks - and every click counts. Other people want to customize the printed text - in other words, hide certain elements of the page from the printout. In other situations, people prefer creating a carefully customized, printable version of the website. For each of these situations, there are different solutions. We've always tried to offer innovative print solutions here at MUO, such as Justin's article about printing on half-pages and Karl's article on PrintWhatYouLike. In this article I'm going to provide four ways that you can integrate a printing button or link into your website - from the very simple HTML and Javascript approach, to the more customizable CSS approach.

Integrating Printing Into Your Website

When you look at any webpage, it's pretty easy to see why you may want to customize the printout. A typical webpage has ads, banners, ad links, sidebars and footer sections that do nothing more than eat up page space and wastefully consume paper. If you have a fairly simple website, or you don't really care whether or not all of the graphics and formatting print, then all you have to do is add a simple button to your webpage and use the "print()" Javascript method to send the webpage to the printer. Placing this code into your site at a location that's quick and easy for your readers to use looks something like this. All the reader has to do is click the button, and the page will get sent directly to the printer without any page formatting whatsoever.
If the page is beyond the printable width for the printer, it's possible you could end up printing far more pages than is actually necessary. Some people don't really like the appearance of a form button, so as an alternative you can simply use a link with the embedded Javascript to do the exact same thing. You can see how in many cases a simple text link looks much cleaner than a button, but which you use really comes down to which looks better in the area of the webpage where you want to provide the print feature. As you can see from the example printout above, the formatting of many ads and banners doesn't perfectly match the browser display when you simply use the print command. This becomes much more apparent for more complex websites. Another approach people use is to block entire sections of the website using CSS, and assign specific sections of the page to print. You do this by first linking the CSS file in the header section. Next, you'll need to create the actual CSS file and save it in the same directory as your webpage. The CSS file should hide all of the sections of the page that should not be printed, and make visible the elements of the page you do want to print:

DIV#header, DIV#newflash, DIV#banner {display: none;}
body {visibility: hidden;}
.print {visibility: visible;}

Now that your CSS file is set up, all you have to do is go through your page and tag each section with the "print" class. This line will be printed. This line won't. Now you can see in the printout that only the sections of the page marked with the "print" class get printed to the page, and all other sections don't. The one difficulty with this approach is that you have to make sure to turn off the display for all DIV sections that you don't want printed. As you can see below, I didn't add the "div" section for the Google Ad, so that still printed. It can take some time to build the CSS file and lay out the classes correctly.
If you really don't want to bother doing this on every page, then you may opt for one last approach. This is my favorite technique for providing perfectly formatted, printable versions of a webpage. All you have to do is create a PDF-formatted version of the webpage content, save it on your web host, and then link the file in the header of the page. That's all there is to it! You can embed the print button on your site just like in the examples above, but instead of the CSS-styled page, the PDF, DOC or other file is sent to the printer. As you can see below, this generates the cleanest printable version of your page, and you can pretty much customize it to look exactly how you want.
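The button, link, and print-stylesheet approaches above can be sketched together in one minimal page. The file name print.css and the element ids are placeholders; the stylesheet rules mirror the CSS approach described earlier:

```html
<!-- In the <head>: a stylesheet that only applies when printing -->
<link rel="stylesheet" href="print.css" media="print">

<!-- Approach 1: a plain form button that calls the browser's print() method -->
<button onclick="window.print()">Print this page</button>

<!-- Approach 2: the same thing as a text link -->
<a href="#" onclick="window.print(); return false;">Print this page</a>

<!-- print.css hides the page chrome and shows only sections tagged "print":
     DIV#header, DIV#newflash, DIV#banner {display: none;}
     body {visibility: hidden;}
     .print {visibility: visible;}
-->
<div class="print">This section will appear in the printout.</div>
<div id="banner">This one won't.</div>
```

Because the stylesheet is linked with media="print", it has no effect on the normal on-screen display; it only kicks in when the reader actually prints.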
  5. A coalition of Hollywood studios, with the addition of Amazon and Netflix, is demanding $16.35m in damages from the operator of Altered Carbon, Area 51, and several other pirate IPTV services. In addition to a permanent injunction, they also seek execution of an earlier settlement agreement that wasn't honored plus $332,600 in attorney's fees. In early July, Warner Bros., several Universal companies, Amazon, Columbia, Disney, Netflix, Paramount, Sony, and other content creators filed a lawsuit against Jason Tusa, the alleged operator of Altered Carbon, Area 51, and other pirate IPTV services. According to the complaint filed in a California court, Tusa is well known to the plaintiffs. In 2020 his Area 51 service was shut down following an Alliance for Creativity and Entertainment (ACE) cease-and-desist letter. A settlement proposal included a clause that Tusa couldn’t launch or be involved with any similar services. While Area 51 was shut down before the proposed settlement was signed, the plaintiffs claim that Tusa then launched a clone service called SingularityMedia which took on Area 51’s customers. ACE responded by contacting Tusa again, demanding that the new service be shut down. It later disappeared. A confidential settlement was reached in October 2020 but it’s claimed that the defendant then launched Digital UniCorn Media and another service called Altered Carbon. At this point, ACE ran out of patience and responded with the current lawsuit alleging direct and willful copyright infringement, contributory copyright infringement, and inducement of copyright infringement for more than 100 copyrighted works. Tusa failed to respond to the lawsuit and last month United States District Court Judge Virginia A. Phillips handed down a preliminary injunction that restrains Tusa and takes action against the domain names used to operate his services.
Plaintiffs File Motion For $16.35m Default Judgment

In a motion for default judgment filed yesterday, the plaintiffs now seek maximum statutory damages for willful infringement of $150,000 per copyrighted work, for a total of $16,350,000. They also seek execution of the settlement sum previously agreed with Tusa (details of which are confidential), a permanent injunction, interest, and attorney’s fees of $332,600. “Tusa is the individual responsible for, and he directly operated, managed, and ultimately profited from, the willful infringement of Plaintiffs’ copyrights in their movies and television shows…through a string of unauthorized movie and television streaming services. Settled law permits entry of default against willful infringers like Tusa who make a strategic decision to not defend their conduct in court,” the motion reads. While this lawsuit deals with a relatively low number of copyrighted works, the studios say that through his unlicensed platforms, Tusa infringed their rights in many more movies and TV shows. While including more titles had the potential to make the case more unwieldy, it’s clear that the $16.35m demand represents just a fraction of the damages the studios could have claimed. “A complete accounting of the scope of Tusa’s infringement would undoubtedly run to thousands of Copyrighted Works,” they write. “Tusa flaunted his wealth from the infringing services on social media, including posting about the purchase of a luxury car with an AREA 51 vanity plate that he said he would decorate with ‘Rick And Morty’ theme. Presumably, Tusa paid for his new car with the ill-gotten proceeds of his infringement.” Due to Tusa’s previous conduct, the studios remain concerned that he will resurrect his services if the court does not restrain him.
“Tusa did shut down Altered Carbon after Plaintiffs filed this action — just as he has done when confronted previously — but his actions confirm he will not refrain from further infringement absent an injunction. Based on Tusa’s repeated actions, it is clear that if he is not enjoined, Tusa will simply rebrand his service and start his infringing conduct all over again,” the studios add, demanding a permanent injunction.

Permanent Injunction

One of the key aims of the proposed injunction is to prevent Tusa from engaging in similar conduct, such as operating any of the named services or any that may appear in the future utilizing the plaintiffs’ copyrighted works. To this end, the studios also demand an order preventing Tusa (and anyone acting in concert with him) from taking any steps to “release publicly, distribute, transfer, or give any source code, object code, other technology, domain names, trademarks, brands, assets or goodwill” in any way related to the named services. The studios also wish to take control of the domains alteredcarbon.online, 2pmtoforever.com, catchingbutterflies.host, stealingkisses.me, dum.world, twoavocados.us, plus any other domain that has been used for infringement of their rights.

Breach of Contract

As noted, Tusa reached a settlement agreement with the studios on October 12, 2020. According to the studios, they upheld their part of the deal but Tusa did not. The financial aspect of the settlement is confidential, but whatever the amount, the studios now want to call in the debt. “Tusa materially breached that agreement when he subsequently launched his follow-on infringing IPTV streaming services, including Altered Carbon. Plaintiffs suffered both irreparable harm and concrete damage in additional costs to bring Tusa into compliance. Tusa is therefore liable for the confidential Settlement Sum,” they inform the court.
  6. Earlier this month Russian telecoms watchdog Roscomnadzor said it would begin blocking VPN providers including NordVPN, ExpressVPN and IPVanish to prevent access to information the government wishes to censor. It now appears that multiple online services have been disrupted, including BitTorrent and Twitch, with multiple parties pointing the finger at Russia's blocking tools. For the past several years, as part of the country’s website blocking efforts, Russian authorities have warned that VPN providers could be next on the list. The problem, according to Russia, is that these services can provide access to material it prefers citizens not to see, everything from pirated content right through to terrorist propaganda. In the view of the authorities, VPN providers should cooperate with the government, but many are unhappy to do so, especially if that involves any type of monitoring or censorship of services that Russia deems offensive. After making broad threats against a range of services in 2019, Russia made good on its warnings by blocking two providers, VyprVPN and OperaVPN. Then, earlier this month, local telecoms watchdog Roscomnadzor said it would block several more, including NordVPN, ExpressVPN, IPVanish, Hola! VPN, KeepSolid VPN Unlimited, and Speedify VPN.

Russia Anticipated There Would Be Problems

In advance of blocking the providers listed above, Russia reached out to the banking sector to ensure that any blocking wouldn’t hurt their activities. The Central Bank then contacted related companies asking them to confirm the names of the VPN services they use, if any, along with the purpose of that use and any known IP addresses.
According to a report from RBC, Roscomnadzor advised that it planned to “implement a set of measures to restrict the use of services,” and the information was needed “in order to exclude VPN connections from access restriction policies.” According to Roscomnadzor, it received responses from 64 industry organizations, 27 of which use the mentioned VPN connections to support 33 technological processes. “More than 100 IP addresses were presented in order to exclude them from access restriction policies,” the watchdog reported. Despite these efforts, however, it appears that Russia’s attempt at blocking the providers may have overstepped the mark.

Disruption Reported On Multiple Online Services

After the new blockades came into effect, multiple online services reported that they were suffering connectivity issues. According to a Kommersant report, these include the game World of Tanks, the game streaming service Twitch, FlashScore (a service used to access football scores and results), and even BitTorrent transfers. The operators of the MMO game World of Warships posted to their portal to explain the problems. “In early September, by order of Roscomnadzor, Internet providers began blocking VPN services. DPI equipment is used to execute orders by providers,” they write. “In the process of blocking VPN services, many UDP ports were affected, including those that have been used in our game since the start of the very first alpha testing. This situation has affected not only large backbone providers, but also many local ones, of which there are a huge number on the territory of Russia.” World of Warships says that the blocking of UDP ports prevented people from logging into their game and also caused disconnections for people already playing. Those affected should contact their ISPs, the company says, but whether this is yielding positive results is unknown. Twitch did not respond to a request for comment, but FlashScore says that it too has experienced problems.
However, despite investigations, it had yet to determine what had caused the technical issues.

Roscomnadzor Rejects Blame, ISPs Aren’t So Sure

Russia’s telecoms watchdog says that despite claims to the contrary, it believes that the network issues did not appear as a result of its work. “When implementing measures to block VPN, the specified UDP ports were not blocked,” a spokesperson said. Sources inside several ISPs in Russia aren’t so sure. “[S]ources in the Big Four operators said that they had already tested their own networks and that the reason for the difficulties was the operation of the TSPU equipment (technical means of countering threats), which Roskomnadzor installed on the networks within the framework of the law on ‘Sovereign RUnet’,” Kommersant reports.

Blocking Providers Just One Part of Russia’s Stance Towards VPNs

As reported back in June, Russia is attacking VPNs on multiple fronts. Every week, Roscomnadzor sends orders to Google to remove hundreds of URLs of sites and services that reportedly allow access to pirated content. Unfortunately, Russian law does not allow Google to share the precise URLs being targeted, but searches on the Lumen Database confirm the existence of takedowns affecting more than half a million links in the past two years.
  7. As part of the Central Bank of Nigeria (CBN)’s digital currency charm offensive, Folashodun Shonubi, the institution’s deputy governor, has claimed that the country’s upcoming central bank digital currency (CBDC) will be a “safer option from privately issued cryptocurrency.”

Payment System Stability

In addition, the CBN’s digital currency — also known as the e-naira — is expected to complement current payment options. This, according to Shonubi, will ensure “the stability of the payment system in the long run.” Meanwhile, in his other remarks as reported by Nairametrics, Shonubi suggested that the CBDC will carry the same promises and have the same functions as the fiat naira. The deputy governor explained:

“The central bank digital currency offers all the benefits of cash but in digital form. Every single digital currency is an electronic version of the cash, the legal tender. When you make a cash payment, settlement is done instantly; digital currencies entail the same promises and even more.”

When fully implemented, Shonubi said he expects to see “rapid inclusion rates” in the coming days. Also, when the CBDC rollout is complete, the e-naira will be distinguishable from privately issued cryptocurrencies that have so far “been used for investment.”
  8. Anticipation has been running high among players at Bitcoin.com Games for their favorite games from the leading software provider, NetEnt. Coming off the recent addition of a range of live casino games from Evolution Gaming and iSoftBet, the popular casino is now bringing in some of the most entertaining titles from this iGaming developer, titles that players have come to cherish over the years. With in-demand slots like Dead or Alive 2 Feature Buy, Ghost Pirates and Narcos, as well as the space-arcade-styled slot Starburst and the beautifully designed Gonzo’s Quest, the newly added range of games from NetEnt promises long and enjoyable gaming sessions enriched with bountiful chances to win Bitcoin. Hundreds of paylines and thousands of multipliers await players who spin the reels, opening possibilities to take home a fortune. Bitcoin.com Games is also home to its very own exclusive games that are only available for play on the turf of this homegrown casino from Bitcoin.com. From the most-played slot game The Angry Banker to the jackpot-loaded Exclusive Slots, players can enjoy classic casino games in their ultimate forms on the most trustworthy crypto casino on the planet! Bitcoin.com Games members can look forward to a highly premium gaming experience along with curated VIP offers that carry generous rewards for players of all levels. An excellent 24/7 support staff also ensures a smooth and entertaining casino session.
  9. Nasdaq-listed Microstrategy has purchased 5,050 more bitcoins for $243 million, raising its total bitcoin holdings to 114,042 coins. Microstrategy Continues to Grow Its Bitcoin Stash The pro-bitcoin software company Microstrategy announced Monday that it has purchased more bitcoins. CEO Michael Saylor tweeted: Microstrategy has purchased an additional 5,050 bitcoins for ~$242.9 million in cash at an average price of ~$48,099 per bitcoin. As of 9/12/21 we hodl ~114,042 bitcoins acquired for ~$3.16 billion at an average price of ~$27,713 per bitcoin. The company also informed the U.S. Securities and Exchange Commission (SEC) about its bitcoin purchase Monday. The filing states that in the third quarter Microstrategy “purchased approximately 8,957 bitcoins for approximately $419.9 million in cash, at an average price of approximately $46,875 per bitcoin, inclusive of fees and expenses.” The 8,957 BTC figure includes the 3,907 BTC purchase that was announced in August. Last week, Saylor revealed that his company avoided “a multi-billion dollar mistake” by choosing to invest in bitcoin instead of gold.
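As a quick sanity check, the figures quoted in the announcement and the SEC filing are internally consistent. The short Python sketch below uses only the numbers reported above (the "~" in the quotes signals they are approximations, so the results are rounded accordingly):

```python
# Sanity-check the purchase figures quoted in Microstrategy's announcement.
# All numbers come from the article; rounding mirrors its "~" approximations.

latest_btc, latest_avg_price = 5_050, 48_099
print(round(latest_btc * latest_avg_price / 1e6, 1))    # 242.9 (million USD)

total_btc, overall_avg_price = 114_042, 27_713
print(round(total_btc * overall_avg_price / 1e9, 2))    # 3.16 (billion USD)

q3_btc, q3_avg_price = 8_957, 46_875
print(round(q3_btc * q3_avg_price / 1e6, 1))            # 419.9 (million USD)
```

Each product matches the dollar figure reported for it, so the averages and totals in the filing line up.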
  10. The majority of servers running across the globe use a Linux-based operating system, so it comes as no surprise that Linux is one of the operating systems developers most often prefer. Alongside these servers, databases also play a crucial role in web infrastructure. As a developer, you might be inclined to run PostgreSQL, a popular relational database, on your local Linux machine. Here's how you can install pgAdmin, an easy-to-use GUI tool that helps you manage these databases on Linux.

What pgAdmin Has to Offer

It is essential to have PostgreSQL installed and configured on your Linux distribution before you can start using this tool to manage your databases. pgAdmin acts as an easier medium for interacting with the database without having to delve into the command-line interface. Here are some nifty features that pgAdmin provides:

- Powerful query tool with color syntax highlighting
- Fast datagrid for display/entry of data
- Graphical query plan display
- Auto-vacuum management
- Monitoring dashboard
- Backup, restore, vacuum, and analyze on demand

Installing pgAdmin on Ubuntu

Open up a terminal emulator of your choice and start by adding the pgAdmin public key using the following command (note the trailing dash, which tells apt-key to read the key from standard input):

curl https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo apt-key add -

Once done, run the command given below to create the repository configuration file:

sudo sh -c 'echo "deb https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt/$(lsb_release -cs) pgadmin4 main" > /etc/apt/sources.list.d/pgadmin4.list && apt update'

You can opt to install only the desktop mode, only the web mode, or both, depending upon your requirements.
Pick the command given below that suits your needs:

Desktop-only mode: sudo apt install pgadmin4-desktop
Web-only mode: sudo apt install pgadmin4-web
Both modes: sudo apt install pgadmin4

If you chose to install the web mode, you will need to configure the web server by running the setup script:

sudo /usr/pgadmin4/bin/setup-web.sh

With that, you're ready to use pgAdmin to manage and interact with your SQL database and perform various database operations with ease.

Database Management Made Easy

Configuring a database according to your project needs is hard enough, but managing your data doesn't have to be, thanks to pgAdmin. Whether it's a local database or a remote database hosted in the cloud, you can use this tool to manage your data across multiple platforms. Choosing the right database is not an easy decision to make. Looking for a suitable database for your upcoming project? Here are some database engines that you should consider.
  11. Relational Database Management Systems (RDBMS) can store a large amount of data using the tabular arrangement of a database. RDBMS are widely used to perform database operations like creating, administering, and managing small and large workloads. PostgreSQL is a fantastic tool to use, but it can be a little daunting to get it up and running in Windows. As such, let us guide you through how to set up PostgreSQL on Windows and get started with your database as soon as possible.

What You Need to Know About PostgreSQL

PostgreSQL is a database management system based on SQL. This enterprise-level software is known for its versatility and scalability. Its flexibility allows it to handle different levels of workloads, from a single machine to multiple machines running simultaneously. Even better, it can function seamlessly with an entire warehouse of concurrent users. PostgreSQL has earned a strong reputation for its proven architecture, reliability, data integrity, robust feature set, and extensibility. The dedication of the open-source community behind the software allows it to deliver performant and innovative solutions consistently.

How to Install PostgreSQL on Windows

The PostgreSQL installation process on Windows is slightly different from its Linux counterparts. You need to install the PostgreSQL Database Server and a graphical tool to administer the database. While you can download both of them separately, you would still need to configure them together, which can be a challenge of its own. It is, therefore, best to download and install a bundled installer. To kickstart the installation, visit the official PostgreSQL website and select Download. On the next page, select Windows since we are downloading a compatible version for Windows OS. On the Windows Installer page, click on Download the Installer. Under the Platform Support section, you will notice some relevant information for each of the released versions.
It's best to note the latest version available for download. Clicking on Download the Installer brings you to the PostgreSQL Database Download page. Depending on your version of Windows, you can choose between Windows x86-64 and Windows x86-32. Select the latest PostgreSQL version from the dialogue box and click on the download button next to it. This should start the setup download for you. Once the EXE file downloads, click on it to begin the setup. The setup will ask you about the destination directory and component details. From the list of components, you can choose from the following:

- PostgreSQL Server
- pgAdmin4
- Stack Builder
- Command Line Tools

RELATED: How To Install And Set Up Microsoft SQL Server On Ubuntu

It's a good idea to check all four boxes, as each application will be useful in the near future. On the next screen, you will be asked to set up a password for the database superuser. Create a password and then click Next. On the next screen, leave the port number unchanged and click Next. You should see a pre-installation summary that lists all the details you've set up. Review each aspect of the installation, and if everything looks fine, click on Next. The Ready to Install dialogue box will appear. Click on Next to begin the installation.

Connecting to PostgreSQL with pgAdmin4

There are two ways to connect to PostgreSQL. You can either use the conventional command-line method or the pgAdmin tool that comes preloaded after the installation process on Windows.

Connecting to PostgreSQL Using the pgAdmin Application

Launch the pgAdmin application from the program files folder or using the Windows Search feature. Log in to the pgAdmin client using the master password that you set during the installation process. Click on the Create Server option and fill in the necessary details like Host, Port, Maintenance Database, Username, and Password. Click on the Save option. The created server is now visible on the left side tab.
Double-click on the server's name and enter the password to connect to the PostgreSQL server.

Connecting to PostgreSQL Using the Command Window

Post-installation, you can search for the SQL Shell (psql) in the Start menu. This is where you will enter any relevant SQL commands. To list all the available databases with psql, type in \l and hit Enter.

How to Create a New Database in PostgreSQL

To create a new database, type CREATE DATABASE test; where test is the name of the database. To access the new database, close the psql terminal and reopen it again. The application will remember the server name, port, user name, and password you used last time. Before you reconnect, change the default database name (postgres) to the name of the database you just created, then press Enter.

How to Create and List Tables in PostgreSQL

To create a table within an existing database, use the following command:

CREATE TABLE PERSON (
ID BIGSERIAL NOT NULL PRIMARY KEY,
NAME VARCHAR(100) NOT NULL,
COUNTRY VARCHAR(50) NOT NULL
);

This command will create a table named PERSON within the database test and add a few columns to it as well. Tweak these columns to suit your own needs. To list all tables in a database, use the \dt command. If you use this command with the above example, you will notice there is only one table, PERSON, in the database test.

RELATED: How To Create A Table In SQL

How to Modify the Root User Credentials

You can change the postgres password after logging in as the root user. To do this, use the following command:

ALTER USER postgres PASSWORD 'newpassword';

Change newpassword to the password of your choice.

Creating and Removing a User Role in PostgreSQL

Many people work simultaneously on a project with different roles. You can create different roles with different access levels in PostgreSQL by using the Windows console. You can also choose whether to grant superuser status to the newly created role.
To create a new role, run the Windows console and change the default directory to the PostgreSQL bin directory (for instance, C:\Program Files\PostgreSQL\9.0\bin), or add this directory to the Path environment variable. Now use the following command in the console:

createuser.exe --createdb --username postgres --no-createrole --pwprompt openpg

You can modify the options to change the role's privileges. You will be prompted to choose the superuser status for the role. Enter y for Yes or n for No, and then assign a password to create the new role. You can remove a user role using the following command:

DROP USER name [, ...];

Working With PostgreSQL in Windows

PostgreSQL is an incredible tool for managing databases reliably and in a foolproof manner. The Windows installation process is relatively simple and requires only a few clicks to get set up and running.
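The create-and-list steps above (CREATE TABLE followed by \dt) can be sketched in a self-contained way. The snippet below uses Python's built-in sqlite3 module purely as a stand-in for a live PostgreSQL server: BIGSERIAL is PostgreSQL-specific, so the sketch substitutes SQLite's auto-incrementing INTEGER PRIMARY KEY, and queries sqlite_master in place of \dt. The table and column names are the ones from the example above; the inserted row is illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# PostgreSQL's BIGSERIAL is swapped for SQLite's INTEGER PRIMARY KEY,
# which also auto-generates the ID column on insert.
cur.execute("""CREATE TABLE PERSON (
    ID INTEGER PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL,
    COUNTRY VARCHAR(50) NOT NULL
)""")

cur.execute("INSERT INTO PERSON (NAME, COUNTRY) VALUES (?, ?)", ("Ada", "UK"))

# sqlite_master plays the role of psql's \dt here: it lists the tables.
tables = [row[0] for row in
          cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['PERSON']
```

The same CREATE TABLE / INSERT flow carries over to psql once you are connected to your own database.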
  12. Relational database management systems (RDBMS) have proven to be a key component of many websites and applications, as they provide a structured way to store, organize, and access information. In this article, we will discuss PostgreSQL in detail, along with a step-by-step guide on installing and configuring it on Ubuntu.

What Is PostgreSQL?

PostgreSQL is an open-source database management system that supports SQL. Using PostgreSQL, developers can build fault-tolerant applications, as it provides excellent data management resources to the database administrator. The platform gives you the flexibility to define your own data types, develop custom functions, and merge code written in different programming languages. PostgreSQL is also highly scalable, both in terms of data quantities and the number of concurrent users on a project. Let's look at the PostgreSQL installation process for Ubuntu 21.04.

Step 1: Install PostgreSQL on Ubuntu

Some PostgreSQL packages are present in the default Ubuntu repository. To install PostgreSQL via the command line, type:

sudo apt install postgresql postgresql-contrib

Verify the Installation

You can find the location of the configuration file using the ls command. This is a verification step that confirms whether PostgreSQL was successfully installed on your system or not.

ls /etc/postgresql/12/main/

The number 12 denotes the version of PostgreSQL. It might be different for you depending on the package you've downloaded on your system.

Check the PostgreSQL Status

After installation, check the status of PostgreSQL using the following command:

service postgresql status

If the output displays the active status, then the PostgreSQL service is running on your system.
On the other hand, if the status is inactive, then you need to start the service by typing:

service postgresql start

Apart from status and start, there are several other PostgreSQL service commands that you can use:

- stop
- restart
- reload
- force-reload

RELATED: Database Engines To Consider For Your Next Project

Step 2: Log In As a Superuser

Before proceeding further, you need to log in as a database superuser on the PostgreSQL server. One of the simplest ways to connect as the PostgreSQL superuser is to switch to the postgres Unix account.

Set Root User Credentials

Log in to the PostgreSQL interactive shell using the command:

sudo -u postgres psql

Set the root user credentials using the following query:

ALTER USER postgres PASSWORD 'newpassword';

Make sure to replace newpassword with a strong password of your choice. Type exit to quit the interactive shell. Log in to psql with the following command:

psql -U postgres -h localhost

Enter the new root password for the user when the prompt appears.

Step 3: Connect to the PostgreSQL Server

When you install PostgreSQL, the platform creates a default user postgres and a system account with the same name. You need to log in as the user postgres to connect to the PostgreSQL server. Use the following command:

sudo su postgres

As soon as you run this command, you will notice a change in your shell prompt. The bash prompt will look like this:

postgres@ubuntu:/home/winibhalla/Desktop$

This shows that you have successfully logged in as the postgres user.

How to Manage PostgreSQL Users

Now that you have connected to the server, it is time to create new users. Type psql to start running commands on the PostgreSQL server.

Create a New User

If there are multiple team members working on different levels within a project, you will need to create different roles for different employees and assign them their access levels.
Use the CREATE USER command to create a new user profile:

CREATE USER user1 WITH PASSWORD 'test123';

In the command above, user1 is the username you want for the new user, followed by test123, which is the password for this user. To check the list of users added to a database, use the \du command. Initially, no privileges are listed for the new user.

Grant Superuser Privileges to New Users

To add a set of privileges to a new user, run the following command:

ALTER USER user1 WITH SUPERUSER;

The ALTER command will grant administrative privileges to the new member. Run the \du command again to verify that the new user has the required set of superuser privileges.

Drop a User From the List of Users

To remove a user from the list of authorized users, use the following command:

DROP USER user1;

Verify the change by listing out the users with the \du command.

RELATED: The Essential SQL Commands Cheat Sheet For Beginners

How to Manage PostgreSQL Databases

PostgreSQL provides its users with several commands to create and remove databases.

Add or Remove a Database

To create a new database using PostgreSQL:

CREATE DATABASE db1;

...where db1 is the name of the database you want to create. Use the \l command to get a list of all the available databases. If you want to remove a database, use the DROP command:

DROP DATABASE db1;

Grant Database Access to Users

You can grant database access to a user using the GRANT command:

GRANT ALL PRIVILEGES ON DATABASE db1 TO user1;

Get Command-Line Help for PostgreSQL

To learn more about PostgreSQL and how to use its various commands, you can open the help page by typing the following command in the terminal:

man psql

Recommended Step: Install pgAdmin

Another recommended step is to install pgAdmin. pgAdmin is one of the most popular and feature-rich open-source administration tools available for PostgreSQL.
While installing pgAdmin is an optional step, you should install it to manage users and databases more easily. To start, add the official pgAdmin repository and its key to your system:

curl https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo apt-key add -

sudo sh -c 'echo "deb https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt/$(lsb_release -cs) pgadmin4 main" > /etc/apt/sources.list.d/pgadmin4.list && apt update'

Now, to install the desktop version:

sudo apt install pgadmin4-desktop

To install the web version, type:

sudo apt install pgadmin4-web

To configure web mode, run the setup-web.sh script provided by pgAdmin:

sudo /usr/pgadmin4/bin/setup-web.sh

Follow the on-screen instructions to complete the process. Rest assured, this is just a one-time step, so you don't have to worry about installing and configuring this again and again.

Managing Databases on Ubuntu Using PostgreSQL

PostgreSQL is a powerful platform for creating database management applications. The ability to process any quantity of data on the platform is one of its biggest highlights. The installation process boils down to downloading, installing, and finally logging in to the database. With a few simple commands, you can master the process of adding new users, creating databases, and adding users to existing databases. Not sure if you like PostgreSQL? Try installing Microsoft SQL Server on your machine.
  13. "Database index" refers to a special kind of data structure that speeds up retrieving records from a database table. Database indices make sure that you can locate and access the data in a database table efficiently without having to search every row each time a database query is processed. A database index can be likened to a book's index. Indices in databases point you to the record you're looking for in the database, just like a book's index page points you to your desired topic or chapter. However, while database indices are essential for quick and efficient data lookup and access, they require additional writes and memory space.

What Is an Index?

Database indexes are special lookup tables consisting of two columns. The first column is the search key, and the second one is the data pointer. The keys are the values you want to search and retrieve from your database table, and the pointer or reference stores the disk block address in the database for that specific search key. The key fields are sorted, which accelerates the data retrieval operation for all your queries.

Why Use Database Indexing?

I'm going to show you database indices in a simplified way here. Let's assume you have a database table of the eight employees working in a company, and you want to find the last entry of the table. To find that entry, you would need to search each row of the database. However, suppose you've alphabetically sorted the table based on the first name of the employees, so the indexing keys are based on the name column. In that case, if you search for the last entry, "Zack", you can jump to the middle of the table and decide whether the entry comes before or after the middle row. As it will come after the middle row, you can again divide the remaining rows in half and make a similar comparison. This way, you don't need to traverse each row to find the last entry.
If the company had 1,000,000 employees and the last entry was "Zack", you might have to search up to 1,000,000 rows (500,000 on average) to find his name. With alphabetical indexing, by contrast, a binary-search-style lookup can do it in about 20 steps. You can now imagine how much faster data lookup and access can become with database indexing.

RELATED: 13 Most Important SQL Commands Any Programmer Should Know

Different File Organization Methods for Database Indexes

Indexing depends heavily on the file organization mechanism used. Usually, there are two types of file organization methods used in database indexing to store data:

1. Ordered Index File: This is the traditional method of storing index data, in which the key values are sorted in a particular order. Data in an ordered index file can be stored in two ways. Dense Index: In dense indexing, an index entry is created for every record. Sparse Index: In sparse indexing, an index entry is created only for some records. To find a record with a sparse index, you first have to find the greatest search key value in the index entries that is less than or equal to the search key value you're looking for.

2. Hash File Organization: In this file organization method, a hash function determines the location or disk block where a record is stored.

Types of Database Indexing

There are generally three methods of database indexing:

- Clustered Indexing
- Non-clustered Indexing
- Multi-level Indexing

1. Clustered Indexing

In clustered indexing, one single file can store more than two data records. The system keeps the actual data in clustered indexing rather than the pointers. Searching is cost-efficient with clustered indexing, as it stores all the related data in the same place. A clustering index uses ordered data files to define itself. Also, joining multiple database tables is very common with this type of indexing. It's also possible to create an index based on non-primary columns that are not unique for each key.
On such occasions, multiple columns are combined to form the unique key values for clustered indexes. So, in short, clustering indices are where similar data types are grouped together and indices are created for them.

Example: Suppose there's a company that has over 1,000 employees in 10 different departments. In this case, the company could use clustered indexing in its DBMS to index the employees who work in the same department. The employees working in one department will be defined as a single cluster, and data pointers in indices will refer to the cluster as a whole entity.

RELATED: What Are Foreign Keys In SQL Databases?

2. Non-clustered Indexing

Non-clustered indexing refers to a type of indexing where the order of the index rows is not the same as how the original data is physically stored. Instead, a non-clustered index points to the data storage in the database.

Example: Non-clustered indexing is similar to a book that has an ordered contents page. Here, the data pointer or reference is the ordered contents page, which is alphabetically sorted, and the actual data is the information on the book's pages. The contents page doesn't store the information in the order it appears on the book's pages.

3. Multi-level Indexing

Multi-level indexing is used when the number of indices is very high and the primary index can't be stored in main memory. As you may know, database indices comprise search keys and data pointers. When the size of the database increases, the number of indices also grows. However, to ensure quick search operations, index records need to be kept in memory. If a single-level index is used when the index count is high, it's unlikely that the index will fit in memory because of its size. This is where multi-level indexing comes into play: the technique breaks the single-level index into multiple smaller blocks.
After breaking down, the outer-level block becomes so small that it can easily be stored in main memory.

RELATED: How To Connect To A MySQL Database With Java

What Is SQL Index Fragmentation?

SQL index fragmentation occurs when the logical order of the index pages doesn't match the physical order in the data file. Initially, all SQL indexes are fragmentation-free, but as you use the database (insert/delete/alter data) repeatedly, fragmentation can creep in. Apart from fragmentation, your database can also face other vital issues like database corruption, which can lead to lost data and a damaged website. If you're doing business with your website, that can be a fatal blow.
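The scan-versus-index speedup described above is easy to observe in practice. The following self-contained sketch uses Python's built-in sqlite3 module (rather than any specific commercial RDBMS) and a hypothetical employee table; EXPLAIN QUERY PLAN shows the query planner switching from a full table scan to an index search once an index exists on the searched column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO employee (name) VALUES (?)",
                [(f"emp{i:06d}",) for i in range(10_000)])

def plan(query):
    # The last column of EXPLAIN QUERY PLAN output describes the strategy.
    return cur.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

query = "SELECT * FROM employee WHERE name = 'emp009999'"
print(plan(query))  # a full SCAN: every row may be examined

cur.execute("CREATE INDEX idx_employee_name ON employee (name)")
print(plan(query))  # a SEARCH using idx_employee_name: only a few steps
```

The exact wording of the plan varies between SQLite versions, but the before/after contrast (SCAN versus SEARCH ... USING ... idx_employee_name) is the point: the sorted index lets the engine jump to the matching key instead of reading every row.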
  14. One of the critical parts of most software systems is a database server, a program used to store and manage data for other software applications. This guide will show you how to install Microsoft SQL Server on Ubuntu 20.04. SQL Server is one of the most robust and widely used database servers in IT. A native SQL Server for Linux has been available since 2017, whereas earlier versions of SQL Server were only available for the Windows operating system.

Installing SQL Server 2019

To get started, import the Microsoft public GNU Privacy Guard (GnuPG) key to your list of trusted keys so that your system establishes an encrypted and secure connection when downloading SQL Server from Microsoft repositories. Use the command below to import the GnuPG key:

wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

Now you should register the Microsoft SQL Server Ubuntu package repository for SQL Server 2019. This is the repository from which you will be downloading SQL Server 2019 for Ubuntu Linux.

sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/20.04/mssql-server-2019.list)"

Note: Replace the version number, i.e. 20.04 in the command above, with the LTS version of Ubuntu you are using. For example, if you are using Ubuntu 18.04, replace /ubuntu/20.04 with /ubuntu/18.04. Update your list of repositories before installing SQL Server so that you get the changes for the newly added repository:

sudo apt update

Finally, install the SQL Server package using the command below:

sudo apt install -y mssql-server

Configuring Your Server

Once the installation is complete, you should proceed to configure your SQL Server instance by setting up the System Administrator (SA) password. Run the command below to start the configuration:

sudo /opt/mssql/bin/mssql-conf setup

The first prompt in the configuration will ask you to choose the edition of SQL Server that you want to install.
Both paid and free editions are available. This guide will use the SQL Server Express edition, which is option 3. Input your option and press Enter. The system will then present you with a link to the license terms and a prompt to accept them. Enter Yes to agree to the terms and proceed with the installation. The next step is to set the System Administrator (SA) password for your SQL Server instance. Use a strong and secure password to prevent your data from being compromised. You can check the status of your SQL Server service using the systemctl command:

systemctl status mssql-server

Installing Azure Data Studio

There are several ways you can interact with SQL Server databases on Linux: using the command line or via a GUI application; this guide uses the latter. In this section, you will install Azure Data Studio, a lightweight cross-platform database management tool. You can use Azure Data Studio to query, design, and maintain your databases on-premises or in the cloud. First of all, download the Azure Data Studio Debian package to your Downloads folder.

Download: Azure Data Studio

Install the Azure Data Studio DEB package using the following command:

sudo apt install ~/Downloads/azuredatastudio-linux-1.30.0.deb

Note that the command assumes the Downloads folder contains the DEB package, so make sure that you are using the correct folder location.

RELATED: How Do You Install A DEB File In Ubuntu?

Running Azure Data Studio

Once the installation above is complete, you can start Azure Data Studio from the terminal:

azuredatastudio

To connect to a database server, click on the New Connection link under the Start section. You will then be prompted to enter your database connection details. Since the database you are connecting to is located on your PC, use localhost as the server name. The default username is SA. Enter the password that you used when configuring your SQL Server instance.
Finally, click the Connect button.

LEARN MORE: What Is 127.0.0.1, Localhost, Or A Loopback Address?

Once connected, the system will list all your databases on the left pane. You can now manage your databases from this screen.

Why Use a SQL-Based Database?

This guide has shown you how to install Microsoft SQL Server, a relational database system, on Ubuntu Linux. In addition, you installed Azure Data Studio to ease the management of your databases. SQL-based databases are easy to manage, very scalable, and widely used by database administrators. Alternatives to SQL-based databases, known as NoSQL databases, are now becoming popular, as they use flexible, non-relational data models for organizing data. Some notable NoSQL databases are Cosmos DB and MongoDB.
  15. Foreign keys allow database administrators to easily identify the different connections that exist within an SQL database management system. SQL lets you store and manipulate the data within a database management system. These databases contain different tables that each store data on a specific entity. If you have a car rental database, one entity (or table) in that database will be customers, which will store all the personal data on each customer. These database tables contain rows and columns, where each row hosts a record and each column holds attribute-specific data. In a database management system, each record (or row) should be unique.

Primary Keys

Though the stipulation is that each record in a table should be distinct, this isn't always the case. Continuing with the car rental database example, if the database contains two customers who each have the name "John Brown", a John Brown could be expected to return a Mercedes-Benz that he didn't rent. Creating a primary key will mitigate this risk. In an SQL database management system, a primary key is a unique identifier that distinguishes one record from another. Therefore, every record in an SQL database management system should have a primary key.

Using Primary Keys in a Database

To include primary keys in a database management system using SQL, you can simply add the key as a normal attribute when creating a new table. So the Customers table will contain four attributes (or columns):

- CustomerID (which will store the primary key)
- FirstName
- LastName
- PhoneNumber

RELATED: How To Create A Table In SQL

Now every customer record that enters the database will have a unique identification number, as well as a first name, last name, and phone number. The phone number isn't unique enough to be a primary key because, though it is unique to one person at a time, a person can easily change their number, meaning it would then belong to someone else.
A Record With a Primary Key Example

/* creates a new record in the Customers table */
INSERT INTO Customers
VALUES ('0004', 'John', 'Brown', '111-999-5555');

The SQL code above will add a new record to the pre-existing Customers table, which now contains two John Brown records, each distinguished by its unique CustomerID.

The Foreign Key

Now you have primary keys that uniquely distinguish one car renter from another. The only problem is that, in the database, there is no real connection between each John Brown and the car that he rents. Therefore, the possibility of making a mistake still exists. This is where foreign keys come into play. Using a primary key to solve the problem of ownership ambiguity is only achievable if the primary key doubles as a foreign key.

What Is a Foreign Key?

In an SQL database management system, a foreign key is a unique identifier or a combination of unique identifiers that connects two or more tables in a database. Of the main types of database management systems, the relational database management system is the most popular one. When deciding which table in a relational database should have a foreign key, you should first identify which table is the subject and which is the object in their relationship. Going back to the car rental database, to connect each customer to the correct car you'll need to understand that a customer (the subject) rents a car (the object). Therefore, the foreign key should be in the Cars table. The SQL code that generates a table with a foreign key is slightly different from the norm.
Creating a Table With a Foreign Key Example

/* creates a new Cars table in the car rental database */
CREATE TABLE Cars (
    LicenseNumber varchar(30) NOT NULL PRIMARY KEY,
    CarType varchar(30) NOT NULL,
    CustomerID varchar(30) FOREIGN KEY REFERENCES Customers(CustomerID)
);

As you can see in the code above, a foreign key has to be explicitly identified as such, along with a reference to the primary key it connects to the new table.

To add a record to the new table, you'll need to ensure that the value in the foreign key field matches a value in the primary key field of the original table.

Adding a Record With a Foreign Key Example

/* creates a new record in the Cars table */
INSERT INTO Cars
VALUES ('100012', 'Mercedes-Benz', '0004');

The code above creates a new record in the new Cars table, producing the following result.

Cars Table

From the table above, you can identify the correct John Brown who rents a Mercedes-Benz by the foreign key in the record.

Advanced Foreign Keys

There are two other ways to use a foreign key in a database. If you look back at the definition of a foreign key above, you'll find that it says a foreign key can be a unique identifier or a combination of unique identifiers.

Going back to the car rental example, you'll see that creating a new record (of the same car) each time a customer rents that car defeats the purpose of the Cars table. If the cars were for sale, each sold to a single customer once, the existing database would be perfect; but given that the cars are rentals, there's a better way to represent this data.

Composite Keys

A composite key has two or more unique identifiers. In a relational database, there will be instances when a single foreign key won't sufficiently represent the relationships that exist within that database. In the car rental example, the most practical approach is to create a new table that stores the rental details.
For the information in the car rental table to be useful, it has to connect to both the Cars and the Customers tables.

Creating a Table With Composite Foreign Keys

/* creates a CarRental table in the car rental database */
CREATE TABLE CarRental (
    DateRented DATE NOT NULL,
    LicenseNumber varchar(30) NOT NULL FOREIGN KEY REFERENCES Cars(LicenseNumber),
    CustomerID varchar(30) NOT NULL FOREIGN KEY REFERENCES Customers(CustomerID),
    PRIMARY KEY (DateRented, LicenseNumber, CustomerID)
);

The code above illustrates an important point: though a table in an SQL database can have more than one foreign key, it can only have a single primary key, because there should be only one unique way to identify a record. Here it's necessary to combine all three attributes to form a unique key. A customer can rent more than one car on the same day (so CustomerID and DateRented isn't a good combination); more than one customer can also rent the same car on the same day (so LicenseNumber and DateRented isn't a good combination either). However, a composite key that says which customer rented what car on what day makes an excellent unique key. This unique key is both a composite foreign key and a composite primary key.

Foreign Primary Keys

Oh yes, foreign primary keys do exist. Though there's no official name for it, a foreign key can also be the primary key of the same table. This happens when you create a new table that contains specialized data about an existing entity (or record in another table).

Say Fred (who works at the car rental company) is in the company's database under the Employee table. After a few years, he becomes a supervisor and gets added to the Supervisor table. Fred is still an employee and will still have the same ID number. So Fred's employee ID is now in the Supervisor table as a foreign key, which also becomes the primary key of that table (as it makes no sense to create a new ID number for Fred now that he's a supervisor).
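The composite-key rules above (same customer and day with a different car is fine; the exact same combination is not) can be checked with a quick SQLite sketch via Python's sqlite3 module. Dates are stored as text here, and all values are invented for illustration:

```python
import sqlite3

# CarRental uses a composite primary key over all three columns, matching
# the article's reasoning that no two-column combination is unique enough.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE CarRental (
        DateRented TEXT NOT NULL,
        LicenseNumber TEXT NOT NULL,
        CustomerID TEXT NOT NULL,
        PRIMARY KEY (DateRented, LicenseNumber, CustomerID)
    )
""")
conn.execute("INSERT INTO CarRental VALUES ('2021-09-01', '100012', '0004')")

# Same customer, same day, different car: allowed.
conn.execute("INSERT INTO CarRental VALUES ('2021-09-01', '100013', '0004')")

# The exact same (date, car, customer) combination: rejected.
try:
    conn.execute("INSERT INTO CarRental VALUES ('2021-09-01', '100012', '0004')")
except sqlite3.IntegrityError:
    print("duplicate composite key rejected")
```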
Now You Can Identify Foreign Keys in SQL Databases

Foreign keys connect different tables within an SQL database. From this article, you can see what a foreign key is, how it works, and why it's important to have them in a database. You also understand the basic, and even more complex, forms of foreign keys. If you think foreign keys are interesting, you're going to have a field day when you start using the project and selection operations to query your SQL databases.
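As a runnable footnote to this article, the Customers–Cars relationship can be sketched end to end with SQLite through Python's sqlite3 module. Note that SQLite only enforces foreign keys after PRAGMA foreign_keys = ON; the sample values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection

# Parent table: customers, identified by their primary key.
conn.execute("""
    CREATE TABLE Customers (
        CustomerID TEXT PRIMARY KEY,
        FirstName TEXT,
        LastName TEXT
    )
""")

# Child table: cars, each pointing back at a customer via a foreign key.
conn.execute("""
    CREATE TABLE Cars (
        LicenseNumber TEXT NOT NULL PRIMARY KEY,
        CarType TEXT NOT NULL,
        CustomerID TEXT REFERENCES Customers(CustomerID)
    )
""")

conn.execute("INSERT INTO Customers VALUES ('0004', 'John', 'Brown')")
conn.execute("INSERT INTO Cars VALUES ('100012', 'Mercedes-Benz', '0004')")  # valid reference

# A car referencing a customer that does not exist is rejected.
try:
    conn.execute("INSERT INTO Cars VALUES ('100013', 'Toyota', '9999')")
except sqlite3.IntegrityError as e:
    print("Rejected:", e)
```

The foreign key is what lets the database itself refuse an orphaned record, rather than leaving that check to application code.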
16. MS SQL Server is a relational database management system (RDBMS) developed by Sybase and Microsoft. It is useful in a wide variety of transaction processing, data analytics, and business intelligence platforms. Microsoft offers a range of SQL Server editions aimed at different workloads and environments.

SQL database corruption affects the consistency of the database and its data. It may occur while reading, writing, moving, or processing data. Although there are ways to prevent corruption, if it happens, you'll need a recovery tool. We'll look at Recovery Toolbox for SQL Server to recover a corrupted SQL database.

Basics of the SQL Server Database and Relational Database Management System

SQL stands for Structured Query Language. It's a database language designed for the retrieval and management of data in a relational database. So how do we define a database? In SQL Server, a database consists of database objects. Some of the common objects are:

Tables: Store a specific set of structured data. A table consists of rows (or records) and columns (or attributes). Columns have a descriptive name and contain a specific data type.

Views: An SQL statement that structures the data in a way users find natural or intuitive. You can create a view to restrict access, to summarize data from various tables, and more.

Stored procedures: A pre-compiled collection of SQL statements and command logic stored in the database. With it, you can execute code and modify the data in your tables.

Functions: A piece of code that performs a particular task. For example, the Format function formats a value with the specified format.

A relational database lets you identify and access data in relation to another piece of data in the database. It organizes data into tables that are linked on the basis of data common to each of them. Each row in a table has a unique ID. The columns of the table hold attributes, and each record has a value for each attribute.
To better understand this, think of a library shelf. A database is one shelf of books, and each book is a table. Although each book has its own contents, it is linked (or related) to other books by shared properties, metadata, or indexes.

SQL Server uses two types of databases. System databases are important because they control the entire operation. User databases are created by users and store the SQL data those users require. Primary database files have an .mdf extension, while transaction log files have an .ldf extension.

Database Corruption and Its Causes

In an organization, database corruption not only puts data at risk but also threatens business revenue. There are multiple possible causes of SQL Server database corruption:

Hard disk sector errors, disk corruption, and memory failure.
Storing database files in compressed folders or volumes.
Poor database design related to normalization, constraints, and resource conflicts.
Accidental data deletion.
File header corruption.
Sudden power failure, network component failure, and unexpected system shutdowns.
Virus attacks (malware, ransomware, adware, etc.).
Incorrect functioning of the operating system.

SQL Recovery Toolbox Step-by-Step Instructions

Recovery Toolbox for SQL Server can fix corrupted SQL Server databases from different versions, ranging from MS SQL 2000 to 2019. It also tries to recover valuable data types such as table data, views, stored procedures, custom functions, indexes, and more. Here are the step-by-step instructions for recovering a damaged .mdf file.

Step 1

Click the Open button and select your source .mdf file through the File Explorer open dialog window. Click Next to proceed to the next step.

Step 2

You'll see a prompt dialog window with the message "Do you wish to start recovery?" Click Yes to start the recovery process.
The SQL Recovery Toolbox will show you a preview of the data in each category, including system and user tables, views, stored procedures, user-defined functions, and data types. For example, when you select the User Tables category, you'll see the list of all user tables and their content in the bottom part of the window. Click Next to continue.

Step 3

In this step, you can export the data from the corrupt database. There are two methods: Save script to disk and Execute script on database.

With the former, the tool creates a directory "Recovered source_file_name" in the destination folder of your choice. It contains scripts (the numbering sequence is important for data files) and an "Install.bat" file (type in the server name, username, and password in the CMD window).

With the latter, specify the details in the Connection String text. Through the "Data Link Properties" dialog box, type in the provider name and authentication details. With this, the SQL Recovery Toolbox will directly execute the script against the database. Since the database can contain gigabytes of data, you can split the output into multiple parts according to your needs: specify a number in Split into parts with size. Click Next to proceed.

Step 4

Although this is an optional step, Recovery Toolbox shows a checkbox near the objects under each category. With this option, you can instruct the tool to retrieve only the data you wish to save from the corrupted database. You can choose by type of database, category, or individual database objects. Click Next to continue.

Step 5

SQL Recovery Toolbox will start the recovery process, and you can track the progress in real time. The process naturally depends on the source file size and CPU performance. When the data export is done, you can see the final summary for the current session, with results such as tables created, views and indexes recovered, read error count, time spent, and more.
Recover Data From the Corrupted SQL Server Database

Recovery Toolbox for SQL Server is a simple tool designed to repair and recover data from corrupted databases in MS SQL Server format (.mdf). The app performs a detailed analysis of the SQL Server database and lets you preview and recover the data from database objects. All recovered data from .mdf files can be transferred either to a new database (on another PC) or to SQL script files. Try out the app and see if it fits your needs. The tool is available for a reasonable price of $99 (personal use) or $149 (business use).
17. System administrators often use monitoring tools such as Zabbix to keep an eye on servers, virtual machines, devices connected to their network, and more. Zabbix is a great tool that provides a graphical interface to control and manage these services efficiently. But the installation process of Zabbix on Linux is quite long and confusing. This article will demonstrate how to easily install Zabbix and its prerequisites on a system running Ubuntu or Debian.

Prerequisites for Zabbix

To successfully install Zabbix on your desktop or server, you'll need:

A root account
MySQL database
PHP
Apache server

Step 1: Install Apache and PHP

Since Zabbix is written in PHP, you will have to install PHP and the Apache server on your machine. Add the following PPA repository to your system using add-apt-repository:

sudo add-apt-repository ppa:ondrej/php

Launch the terminal and update your system's repository list using APT:

sudo apt update

Upgrade the installed packages to ensure that no outdated packages are present on your computer:

sudo apt upgrade

Next, install the necessary packages for Apache and PHP:

sudo apt install apache2 php php-mysql php-ldap php-bcmath php-gd php-xml libapache2-mod-php

After installing the packages, the system will automatically configure the Apache service to start at boot. Check whether the service is currently running on your machine using systemctl:

systemctl status apache2

If the status displays active (running), then everything's fine. If not, you can start, stop, or restart the service manually:

systemctl start apache2
systemctl stop apache2
systemctl restart apache2

Step 2: Install and Set Up the MySQL Database

Issue the following command in the terminal to install MySQL:

sudo apt install mysql-server mysql-client

Now you have to secure the MySQL installation on your Ubuntu machine. To make your work easier, MySQL provides a script that configures the security settings for you.
Launch the terminal and type:

mysql_secure_installation

Type the root user password and press Enter. The script will ask you some questions to configure the installation, such as:

Set root password?
Remove anonymous users?
Disallow root login remotely?
Remove test database and access to it?
Reload privilege tables now?

Type y and press Enter for each question.

Now it's time to create a new database for Zabbix. Launch the terminal and enter the following command:

mysql -u root -p

Execute the following statements to create a new database and grant appropriate privileges to a new user. Make sure to replace password in the second statement with a strong password of your choice.

CREATE DATABASE zabbixdb character set utf8 collate utf8_bin;
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON zabbixdb.* TO 'zabbix'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Once done, quit the MySQL shell by typing:

quit;

Step 3: Download and Install Zabbix

To install Zabbix on Ubuntu and Debian, download the DEB package from the official Zabbix repository. Use wget to download the package file:

wget https://repo.zabbix.com/zabbix/5.0/debian/pool/main/z/zabbix-release/zabbix-release_5.0-1+buster_all.deb

Install the downloaded package using APT:

sudo apt install ./zabbix-release_5.0-1+buster_all.deb

Next, install the Zabbix server, agent packages, and the web frontend:

sudo apt install zabbix-server-mysql zabbix-frontend-php zabbix-agent

Now, create and load the Zabbix database schema into the database you created:

zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -u root -p zabbixdb

Step 4: Configure the Zabbix Server

Although you have installed Zabbix on your system, it is not yet configured to use the database you created earlier. Open the Zabbix configuration file located at /etc/zabbix using your favorite Linux text editor.
nano /etc/zabbix/zabbix_server.conf

Now, locate the following lines in the configuration file and set the host, database name, username, and password:

DBHost=localhost
DBName=zabbixdb
DBUser=zabbix
DBPassword=password

Make sure to replace password with the strong password you chose earlier.

Step 5: Configure the Apache Server

Before moving forward, you need to make some changes to the Zabbix Apache configuration file. To do that, first reload the Apache server using systemctl:

systemctl reload apache2

Open the configuration file using nano or any other text editor:

nano /etc/zabbix/apache.conf

Find the line php_value date.timezone <time_zone> and replace <time_zone> with the time zone corresponding to your geographical location.

Step 6: Finishing Configuration

Now that you have finished tweaking the files, it is time to start the services and set up Zabbix graphically. Restart the Apache service using systemctl:

systemctl restart apache2

Start the Zabbix server and agent by typing the following command:

systemctl start zabbix-server zabbix-agent

Enable the Zabbix services so they start at boot:

systemctl enable zabbix-server zabbix-agent

Verify that the Zabbix server is running on your system using the systemctl status command:

systemctl status zabbix-server

Proceed if the status displays active in green.

Step 7: Tweaking the Firewall With UFW

To ensure that Zabbix works properly on your system, you'll have to open ports 80 and 443 on your network. On Linux, UFW is a great utility that helps you configure firewalls and manage ports. Open ports 80 and 443 by typing the following commands:

ufw allow 80/tcp
ufw allow 443/tcp

Reload your firewall to save the changes.
ufw reload

Step 8: Configure the Zabbix Frontend

Launch any web browser on your Linux system and head over to the following address:

http://localhost/zabbix

If you've installed Zabbix on a remote Linux server, replace localhost with the IP address of the server. The browser will display the Zabbix welcome page. Click on the Next Step button to continue.

Now, Zabbix will check the prerequisites required for the application. If you find a missing package, go ahead and install it using the terminal. Once done, click Next Step.

Enter the database password you set in the configuration file earlier, then select Next Step. The system will ask you for information related to the server; enter an appropriate server name and proceed by clicking Next Step.

Zabbix will then summarize all the configurations and settings that you've made. Review these settings and click Next Step if everything looks good. The installation process will now begin. Select Finish once Zabbix has finished installing.
18. Structured Query Language (SQL) is a mathematically based language used to query databases. There are several different types of database management systems in existence; SQL is used with the relational database management system.

The relational database management system (or relational model) deals with the mathematical concept of a relation, physically represented as a table. These tables are organized into rows and columns, where the rows contain records and the columns contain attributes. Two special types of operations can be carried out on the rows and columns in a table: project and selection.

Project Operation

The project SQL operation allows users of the relational model to retrieve column-specific data from a table. This data is then used to create a new table dedicated to the information the user would like to see. So, if you had a relational model consisting of nine different columns but you only needed the name and the date of birth for each individual in the table, you would use a project operation to retrieve this data.

Project Operation Structure

Select column_name from table_name

The project operation has a pretty straightforward structure, consisting of exactly four parts:

The Select keyword (SQL keywords are not case-sensitive, but are conventionally capitalized).
The column name(s); if there is more than one, each should be separated from the next with a comma.
The from keyword.
The table name.

Using the Project Operation on a Table

Imagine a furniture store that has a relational database management system. In this database is a customer table that stores all the data we have on each customer. The customer table has nine fields:

CustomerID
FirstName
LastName
DOB
PhoneNumber
Email
CustomerAddress
City
Country

Customer Table Example

One day the customer relations officer comes up with a brilliant idea aimed at improving customer relationships.
The idea is to get the software developer to create a simple automated program that will email each customer on their birthday. So now you need exactly four fields of data from the customer table: FirstName and LastName, to personalize the email; DOB, to know the date to schedule the email for; and Email.

Using the Project Operation Example

Select FirstName, LastName, DOB, Email from Customer

The code above will generate a new table that can be used to drive the simple program. The generated table can be seen below.

Customers Birthday Table Example

In this instance, the project operation proves very useful for two reasons: it protects the privacy of the customers, and it provides exactly the information that is needed. The customers trust the store with their information, and by providing only the data that is essential for a specific member of staff to carry out their duties, that trust is protected.

The Similarities Between the Project and Selection Operations

The selection operation targets records (rows), or specific entities, in a relational database. The structure of a selection operation is very similar to that of the project operation; in fact, there is one specific query that can be used as either a project or a selection operation, because it returns the same result in either case. This operation is known as a select all query, and it produces all the data in a table.

Select All Example

Select * from table_name

If you were to use the query above as a project operation, you would say that you are selecting all the attributes (columns) in a relational database. However, if you were to use it as a selection operation, you would be selecting all the records (rows). The point is that regardless of the operation type, you will always get the same result.
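The project operation above can be run against SQLite through Python's sqlite3 module. The column names follow the article's furniture store example; the two sample customers are invented for illustration:

```python
import sqlite3

# Build the nine-column Customer table from the article.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Customer (
        CustomerID TEXT, FirstName TEXT, LastName TEXT, DOB TEXT,
        PhoneNumber TEXT, Email TEXT, CustomerAddress TEXT, City TEXT, Country TEXT
    )
""")
conn.executemany("INSERT INTO Customer VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", [
    ('001', 'Ann', 'Lee', '1990-05-04', '555-0101', 'ann@example.com', '1 Elm St', 'Kingston', 'Jamaica'),
    ('002', 'Ben', 'Cho', '1985-11-12', '555-0102', 'ben@example.com', '2 Oak St', 'Montego Bay', 'Jamaica'),
])

# Projecting four of the nine columns yields rows holding only those fields.
rows = conn.execute("SELECT FirstName, LastName, DOB, Email FROM Customer").fetchall()
for row in rows:
    print(row)
```

Each returned row has exactly the four projected fields, so the birthday-email program never sees the rest of the customer record.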
Using Select All on the Customers Table

Select * from Customers

The code above simply regenerates the original Customers table, which can be seen under the "Customer Table Example" above.

The Selection Operation

What makes an average selection operation different from a project operation is the "where" property. The "where" property makes it possible for the selection operation to target records that meet a specific criterion.

Selection Operation Structure Example

Select * from table_name where column_name = value

Using the Selection Operation

Our furniture store has branches all over the country, and all of these branches are connected to the main database. From this database, the managing director was able to see that a branch in a specific city is not performing as well as the others. After some brainstorming, the decision was made to create a "bring a friend" initiative. The idea is to email customers of the poorly performing branch a coupon; if they bring a friend who purchases an item, that coupon can be used for a 10% discount off their next purchase. The database administrator now needs to generate a new table that contains only customers from the target city.

Selecting All Customers From Kingston Example

Select * from Customers where City='Kingston';

The example above would generate the following table.

Using the Project and Selection Operations Together

The table created above using the selection operation gets the job done: it gives you a record of all customers in the city of Kingston. The only problem is that you have now thrown the customers' privacy right out the door. The staff member who will be emailing these coupon codes to the Kingston customers does not need access to their full address, phone number, or customer ID. Using the project and selection operations together solves this little problem.
Using the Project and Selection Operation Example

Select FirstName, LastName, Email from Customers where City='Kingston';

The query above will generate the following table. As you can see, only the information necessary to carry out this particular task is available.

Now You Can Use the Project and Selection Operations

Now that you understand the basic structure of a relational database management system, you can use the project and selection operations separately and together. This is just one of the many ways to interrogate database tables.
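The combined query can be sketched the same way with SQLite via Python's sqlite3 module. The table layout and sample values are invented; only the column list and the Kingston filter follow the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Customers (
        CustomerID TEXT, FirstName TEXT, LastName TEXT,
        PhoneNumber TEXT, Email TEXT, City TEXT
    )
""")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?, ?, ?, ?)", [
    ('001', 'Ann', 'Lee', '555-0101', 'ann@example.com', 'Kingston'),
    ('002', 'Ben', 'Cho', '555-0102', 'ben@example.com', 'Montego Bay'),
    ('003', 'Cy', 'Dee', '555-0103', 'cy@example.com', 'Kingston'),
])

# Selection (the WHERE clause) picks only Kingston rows; projection (the
# column list) keeps only the fields the coupon email actually needs.
coupon_list = conn.execute(
    "SELECT FirstName, LastName, Email FROM Customers WHERE City='Kingston'"
).fetchall()
print(coupon_list)
```

Only the two Kingston customers come back, and only their names and email addresses, so the staff member never sees phone numbers or IDs.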
19. Much of the power of relational databases comes from filtering data and joining tables together. This is why we represent those relations in the first place. But modern database systems provide another valuable technique: grouping.

Grouping allows you to extract summary information from a database. It lets you combine results to create useful statistical data. Grouping saves you from writing code for common cases such as averaging lists of figures. And it can make for more efficient systems.

What Does the GROUP BY Clause Do?

GROUP BY, as the name suggests, groups results into a smaller set. The results consist of one row for each distinct value of the grouped column. We can show its usage by looking at some sample data with rows that share some common values.

The following is a very simple database with two tables representing record albums. You can set up such a database by writing a basic schema for your chosen database system. The albums table has nine rows with a primary key id column and columns for name, artist, year of release, and sales:

+----+---------------------------+-----------+--------------+-------+
| id | name                      | artist_id | release_year | sales |
+----+---------------------------+-----------+--------------+-------+
|  1 | Abbey Road                |         1 |         1969 |    14 |
|  2 | The Dark Side of the Moon |         2 |         1973 |    24 |
|  3 | Rumours                   |         3 |         1977 |    28 |
|  4 | Nevermind                 |         4 |         1991 |    17 |
|  5 | Animals                   |         2 |         1977 |     6 |
|  6 | Goodbye Yellow Brick Road |         5 |         1973 |     8 |
|  7 | 21                        |         6 |         2011 |    25 |
|  8 | 25                        |         6 |         2015 |    22 |
|  9 | Bat Out of Hell           |         7 |         1977 |    28 |
+----+---------------------------+-----------+--------------+-------+

The artists table is even simpler.
It has seven rows with id and name columns:

+----+---------------+
| id | name          |
+----+---------------+
|  1 | The Beatles   |
|  2 | Pink Floyd    |
|  3 | Fleetwood Mac |
|  4 | Nirvana       |
|  5 | Elton John    |
|  6 | Adele         |
|  7 | Meat Loaf     |
+----+---------------+

You can understand various aspects of GROUP BY with just a simple data set such as this. Of course, a real-life data set would have many, many more rows, but the principles remain the same.

Grouping by a Single Column

Let's say we want to find out how many albums we have for each artist. Start with a typical SELECT query to fetch the artist_id column:

SELECT artist_id FROM albums

This returns all nine rows, as expected:

+-----------+
| artist_id |
+-----------+
|         1 |
|         2 |
|         3 |
|         4 |
|         2 |
|         5 |
|         6 |
|         6 |
|         7 |
+-----------+

To group these results by artist, append the phrase GROUP BY artist_id:

SELECT artist_id FROM albums GROUP BY artist_id

Which gives the following results:

+-----------+
| artist_id |
+-----------+
|         1 |
|         2 |
|         3 |
|         4 |
|         5 |
|         6 |
|         7 |
+-----------+

There are seven rows in the result set, reduced from the total of nine in the albums table. Each unique artist_id has a single row.

Finally, to get the actual counts, add COUNT(*) to the columns selected:

SELECT artist_id, COUNT(*) FROM albums GROUP BY artist_id

+-----------+----------+
| artist_id | COUNT(*) |
+-----------+----------+
|         1 |        1 |
|         2 |        2 |
|         3 |        1 |
|         4 |        1 |
|         5 |        1 |
|         6 |        2 |
|         7 |        1 |
+-----------+----------+

The results group two pairs of rows for the artists with ids 2 and 6. Each has two albums in our database.

How to Access Grouped Data With an Aggregate Function

You may have used the COUNT function before, particularly in the COUNT(*) form seen above. It fetches the number of results in a set.
You can use it to get the total number of records in a table:

SELECT COUNT(*) FROM albums

+----------+
| COUNT(*) |
+----------+
|        9 |
+----------+

COUNT is an aggregate function. This term refers to functions that translate values from multiple rows into a single value. They are often used in conjunction with the GROUP BY statement.

Rather than just count the number of rows, we can apply an aggregate function to grouped values:

SELECT artist_id, SUM(sales) FROM albums GROUP BY artist_id

+-----------+------------+
| artist_id | SUM(sales) |
+-----------+------------+
|         1 |         14 |
|         2 |         30 |
|         3 |         28 |
|         4 |         17 |
|         5 |          8 |
|         6 |         47 |
|         7 |         28 |
+-----------+------------+

The total sales shown above for artists 2 and 6 are their multiple albums' sales combined:

SELECT artist_id, sales FROM albums WHERE artist_id IN (2, 6)

+-----------+-------+
| artist_id | sales |
+-----------+-------+
|         2 |    24 |
|         2 |     6 |
|         6 |    25 |
|         6 |    22 |
+-----------+-------+

Grouping by Multiple Columns

You can group by more than one column. Just include multiple columns or expressions, separated by commas. The results will group according to the combination of these columns.

SELECT release_year, sales, COUNT(*) FROM albums GROUP BY release_year, sales

This will typically produce more results than grouping by a single column:

+--------------+-------+----------+
| release_year | sales | COUNT(*) |
+--------------+-------+----------+
|         1969 |    14 |        1 |
|         1973 |    24 |        1 |
|         1977 |    28 |        2 |
|         1991 |    17 |        1 |
|         1977 |     6 |        1 |
|         1973 |     8 |        1 |
|         2011 |    25 |        1 |
|         2015 |    22 |        1 |
+--------------+-------+----------+

Note that, in our small example, just two albums have the same release year and sales count (28 in 1977).

Useful Aggregate Functions

Aside from COUNT, several functions work well with GROUP BY. Each function returns a value based on the records belonging to each result group.

COUNT() returns the total number of matching records.
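The grouped SUM above can be reproduced with SQLite through Python's sqlite3 module; the block below uses a subset of the article's albums data, enough to cover the two multi-album artists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE albums (
        id INTEGER PRIMARY KEY, name TEXT,
        artist_id INTEGER, release_year INTEGER, sales INTEGER
    )
""")
conn.executemany("INSERT INTO albums VALUES (?, ?, ?, ?, ?)", [
    (1, 'Abbey Road',                1, 1969, 14),
    (2, 'The Dark Side of the Moon', 2, 1973, 24),
    (5, 'Animals',                   2, 1977,  6),
    (7, '21',                        6, 2011, 25),
    (8, '25',                        6, 2015, 22),
])

# One output row per distinct artist_id; SUM() combines each group's sales.
totals = dict(conn.execute(
    "SELECT artist_id, SUM(sales) FROM albums GROUP BY artist_id"
))
print(totals)  # artist 2: 24 + 6 = 30; artist 6: 25 + 22 = 47
```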
SUM() returns the total of all values in the given column, added up.
MIN() returns the smallest value in a given column.
MAX() returns the largest value in a given column.
AVG() returns the mean average. It's the equivalent of SUM() / COUNT().

You can also use these functions without a GROUP BY clause:

SELECT AVG(sales) FROM albums

+------------+
| AVG(sales) |
+------------+
|    19.1111 |
+------------+

Using GROUP BY With a WHERE Clause

Just as with a normal SELECT, you can still use WHERE to filter the result set:

SELECT artist_id, COUNT(*) FROM albums WHERE release_year > 1990 GROUP BY artist_id

+-----------+----------+
| artist_id | COUNT(*) |
+-----------+----------+
|         4 |        1 |
|         6 |        2 |
+-----------+----------+

Now you have only those albums released after 1990, grouped by artist. You can also use a join with the WHERE clause, independently of the GROUP BY:

SELECT r.name, COUNT(*) AS albums
FROM albums l, artists r
WHERE artist_id=r.id AND release_year > 1990
GROUP BY artist_id

+---------+--------+
| name    | albums |
+---------+--------+
| Nirvana |      1 |
| Adele   |      2 |
+---------+--------+

Note, however, that if you try to filter based on an aggregated column:

SELECT r.name, COUNT(*) AS albums
FROM albums l, artists r
WHERE artist_id=r.id AND albums > 2
GROUP BY artist_id;

You'll get an error:

ERROR 1054 (42S22): Unknown column 'albums' in 'where clause'

Columns based on aggregate data are not available to the WHERE clause.

Using the HAVING Clause

So, how do you filter the result set after a grouping has taken place? The HAVING clause deals with this need:

SELECT r.name, COUNT(*) AS albums
FROM albums l, artists r
WHERE artist_id=r.id
GROUP BY artist_id
HAVING albums > 1;

Note that the HAVING clause comes after the GROUP BY. Otherwise, it's essentially a simple replacement of WHERE with HAVING.
The results are:

+------------+--------+
| name       | albums |
+------------+--------+
| Pink Floyd |      2 |
| Adele      |      2 |
+------------+--------+

You can still use a WHERE condition to filter the results before the grouping. It will work together with a HAVING clause for filtering after the grouping:

SELECT r.name, COUNT(*) AS albums
FROM albums l, artists r
WHERE artist_id=r.id AND release_year > 1990
GROUP BY artist_id
HAVING albums > 1;

Only one artist in our database released more than one album after 1990:

+-------+--------+
| name  | albums |
+-------+--------+
| Adele |      2 |
+-------+--------+

Combining Results With GROUP BY

The GROUP BY statement is an incredibly useful part of the SQL language. It can provide summary information of data, for a contents page, for example. It is an excellent alternative to fetching large quantities of data. The database handles this extra workload well, since its very design makes it optimal for the job. Once you understand grouping and how to join multiple tables, you'll be able to utilize most of the power of a relational database.
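The WHERE-versus-HAVING distinction can be demonstrated with a small SQLite sketch through Python's sqlite3 module; the per-artist sales figures mirror the article's multi-album artists (ids 2 and 6), and the rest are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE albums (artist_id INTEGER, sales INTEGER)")
conn.executemany("INSERT INTO albums VALUES (?, ?)",
                 [(1, 14), (2, 24), (2, 6), (3, 28), (6, 25), (6, 22)])

# WHERE runs before grouping and cannot see COUNT(*); HAVING runs after the
# groups are formed, so it can filter on the aggregate (via its alias here).
multi = conn.execute("""
    SELECT artist_id, COUNT(*) AS n
    FROM albums
    GROUP BY artist_id
    HAVING n > 1
""").fetchall()
print(multi)  # only artists with more than one album remain
```

Swapping the HAVING clause for WHERE n > 1 fails for the same reason as the article's error example: the alias for an aggregate is not visible before grouping.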
20. Two Netflix movie screeners appeared online a few hours ago, way ahead of their planned release dates. Pirate release group EVO published advance copies of 'The Power of the Dog' and 'The Guilty,' which subsequently leaked online. The releases are not typical award screeners but appear to be film festival screeners instead.

Pirated copies of movies leak all year round, usually after they come out on streaming services or through digital release. That by itself is nothing special. Screener releases are a notable exception to this rule. These are advance copies of recent movies that are generally sent out to critics and awards voters. The screeners are supposed to remain private, but every year a few end up in the hands of pirates. These leaked copies are then published online, sometimes months ahead of their official release dates.

'The Power of the Dog' Screener

That's exactly what happened to two Netflix titles over the past few hours. While 'screener season' usually starts around December, a leaked copy of the Netflix movie "The Power of the Dog" was published on Sunday. The film, starring Benedict Cumberbatch and Kirsten Dunst, is officially scheduled to premiere on December 1st. However, over the past few hours, tens of thousands of pirates have already grabbed an early copy.

The leak was published by the pirate release group EVO, which also released the first screeners last year. The source is an online screener, which has become the new standard in recent years. The release is tagged as a 'WEBSCREENER,' which confirms that the copy was obtained from a screener made available over the Internet. While some had hoped that these online releases would be easier to secure, the current leak clearly shows that there are weak spots.

The.Power.of.the.Dog.2021.WEBSCREENER.XviD.AC3-EVO

TorrentFreak contacted EVO to find out more about the source of this screener, but the group said that it can't say anything about the 'festival' it's connected to due to security reasons.
'The Guilty' Screener

The release group did mention, however, that another movie would be leaked soon. And indeed, after a few hours, another prominent Netflix screener was posted online. This time it's the Jake Gyllenhaal film "The Guilty." The.Guilty.2021.WEBSCREENER.XviD.AC3-EVO has since been republished on various pirate sites. The movie officially premieres in early October, which means that pirates can see it earlier than paying subscribers.

These screeners appear too early for the Academy Awards. And since EVO suggested that the leaks are sourced from festival screeners, we have to look elsewhere.

Film Festival

Interestingly, both "The Power of the Dog" and "The Guilty" are in the screener lineup of the annual Toronto International Film Festival (TIFF). This festival started last Thursday and is currently ongoing. Like many other festivals, TIFF hosts both in-person and online screenings. The latter have become increasingly common during the COVID pandemic.

While we can't know for sure where these leaks come from, it's pretty clear that screeners can still leak when festivals and award shows move to digital screeners only, which is the case for the Oscars as well.

"Let's hope the season starts," EVO told us, referring to the traditional 'pirate screener season.' However, the group didn't say whether more films are expected to leak anytime soon.
21. The world's oldest active torrent file turns 18 years old this month, and it's still being seeded by dozens of people. "The Fanimatrix" torrent was published in 2003, when BitTorrent was still a relatively new protocol. At the time, the torrent's creator saw it as the only affordable option to share the Matrix fan film with the world.

BitTorrent is an excellent distribution mechanism but, for a file to live on, at least one person has to keep sharing it. This means that most torrents eventually die after the public loses interest. However, some torrents seem to live on forever.

The Fanimatrix

The oldest surviving torrent we can identify is a copy of the Matrix fan film "The Fanimatrix." The torrent was created in September 2003, which means that it will turn 18 this month. A remarkable achievement.

The film was shot by a group of New Zealand friends. With a limited budget of just $800, nearly half of which was spent on a leather jacket, they managed to complete the project in nine days. While shooting the film was possible within these financial constraints, sharing it with the world was a bigger challenge. At the time there were no free video-sharing services, and YouTube had yet to be invented.

No Money For Distribution

Hosting the film on a private server wasn't an option either. Bandwidth was still very expensive, especially in New Zealand. If the project was to be a success, the friends would have to pay many thousands of dollars extra to distribute it.

This is when one of the friends, Sebastian Kai Frost, went looking for other options. Frost had a bit part in the film and also operated as the 'IT guy.' After searching for solutions, he eventually stumbled upon a new technology called BitTorrent.

"It looked promising because it scaled such that the more popular the file became, the more the bandwidth load was shared. It seemed like the perfect solution," Frost told us earlier.
BitTorrent To the Rescue

This was exactly what was needed to get the Fanimatrix published worldwide. So, after Frost got the green light from the rest of the crew, he created a torrent on September 28, 2003. To ensure that everything ran smoothly, he also ran his own tracker from a Linux box.

The Fanimatrix turned out to be a great success. In the first week alone, 70,000 people grabbed a copy of the film. This is quite an achievement, especially considering that BitTorrent wasn't as widely known or easy to use back then. With BitTorrent, the film crew easily saved hundreds of thousands of dollars in distribution costs. It was the perfect use case for how the technology could help independent creators.

Preserving Internet History

Today, there are plenty of free 'instant streaming' options available that make BitTorrent look dated by comparison. However, it is great to see that the Fanimatrix is still being shared after all these years. It's part of Internet history now. And with the fourth installment of The Matrix scheduled to premiere later this year, the project may gain some extra traction as well.

TorrentFreak spoke to Frost this week, who doubled down on his earlier promise to keep the Fanimatrix site and the torrent running for as long as possible. He's even taken some precautions in case of an early departure from this world. The original Fanimatrix site went offline for a while, but Frost restored it years ago and plans to keep it online for as long as possible. The crew is also considering a special celebration in two years, when the two-decade mark is passed.
22. One of South Africa's biggest cryptocurrency exchanges, Luno, has confirmed that it has started restricting withdrawals by clients. The exchange insists the limits are meant to "act as a deterrent for illicit actors moving large amounts of funds within the crypto ecosystem."

Transfers From Luno to Binance Blocked

However, despite this acknowledgement, Luno has so far refused to explain how the exchange sets the so-called "dynamic risk-based limits." According to a report, the limits, which are separate from the send limits that appear on Luno's website, were discovered by one of the exchange's clients. They became apparent when the client's attempt to transfer crypto assets from a Luno account to a Binance wallet failed.

When approached for answers, Luno explained to the client that the limits had been imposed in order to "protect our customers and in an effort to comply with best practices in anti-financial crime and anti-fraud." Furthermore, the exchange told the client that "the limits are dynamic in nature and are calculated based on our overall customer risk scoring, the limits may differ from customer to customer." However, Luno told the affected client that the exchange "does not disclose how [the] send limits are calculated on an individual level."

Luno Customers Unable to Influence Their Risk Score

In the meantime, the report quotes Marius Reitz, general manager for Luno Africa, explaining why and how the wider concept of a risk-based approach is being used to determine the limits for each client. He said:

As part of the wider concept of a risk-based approach mentioned, for instance in the Financial Intelligence Centre Act (FICA), customer risk profiles are designed and scored based on a multitude of different data points.
Reitz added that while customers are not in a position to influence their risk score, they can still "optimise their risk position by keeping their account information up to date, enabling safety features on their account, and generally keeping their account secure."

When asked about speculation that the exchange started implementing these dynamic risk-based limits at the request of the financial surveillance department (Finsurv), Reitz denied this. Instead, the general manager asserted that Luno is acting on its own initiative, saying the exchange "takes the utmost care to keep our financial crime measures as confidential as possible to ensure they remain effective."
23. The chairman of India's Parliamentary Standing Committee on Finance explains that cryptocurrency legislation in India will be "distinct and unique." He added, "We have to balance stability and growth but we recognize how important this whole area of crypto is."

Lawmaker Provides an Update on Crypto Legislation

Jayant Sinha, a lawmaker of the ruling Bharatiya Janata Party, talked about India's cryptocurrency legislation Wednesday at an event organized by the Blockchain and Crypto Assets Council (BACC) of the Internet and Mobile Association of India (IAMAI). Sinha, who is the chairman of India's Parliamentary Standing Committee on Finance, explained that it is not possible for India to adopt the cryptocurrency policies used in advanced economies because the nation still does not have full capital account convertibility. He clarified that India's crypto policies will not follow those of the U.S., Japan, or El Salvador, the country which made bitcoin legal tender this week. The lawmaker elaborated:

Our solution will have to be distinct and unique simply because of our unique circumstances. We have to balance stability and growth but we recognize how important this whole area of crypto is.

Furthermore, he noted that the committee will consider crypto legislation with national security in mind, adding: "We have to be very watchful about what happens to these crypto assets and cryptocurrencies. Use of these kinds of crypto instruments in terror financing and for domestic security threats is something we have to be mindful of."

On Tuesday, a former deputy governor of the Reserve Bank of India (RBI), R. Gandhi, said that crypto must be regulated as an asset or commodity in India and governed by existing laws.
He explained that "once cryptocurrencies are accepted, rules governing commodity exchanges could apply and the coins could be used to pay for goods and services," Bloomberg reported, quoting him as saying, "Then automatically people can start buying, selling and holding." According to a recent report, the Indian government is planning to regulate crypto assets as commodities and by use case. Previously, there were reports of the government planning to ban all cryptocurrencies like bitcoin, allowing only central bank digital currencies (CBDCs) issued by the RBI. Meanwhile, the central bank is planning to unveil a digital rupee model by the end of the year.
  24. The parliament in Kyiv has passed legislation determining the rules for crypto-related operations in Ukraine. The law “On Virtual Assets” recognizes cryptocurrencies as intangible goods while denying them the status of legal tender. It also regulates the activities and obligations of crypto businesses. Ukraine Legalizes Crypto Activities, Defines Virtual Assets Ukraine’s Verkhovna Rada, the country’s parliament, has adopted the law “On Virtual Assets” on second and final reading. The legislation regulates operations with cryptocurrencies in the Ukrainian jurisdiction. Deputies passed the bill with a large majority of 276 votes out of 376 present MPs, with only six voting against the motion. The long-awaited law will enter into force after lawmakers approve amendments to the country’s tax code pertaining to the taxation of cryptocurrency transactions. The Ukrainian legislature is yet to vote on these changes, Forklog noted in its report on the development. Provisions of the new law recognize virtual assets as intangible goods, which can be secured and unsecured. However, cryptocurrencies are not accepted as a legal means of payment in Ukraine and their exchange for other goods or services will not be allowed. The law also introduces the term “financial virtual assets” that must be issued by entities registered in Ukraine. In case these assets are backed by currencies, they will be regulated by the National Bank of Ukraine (NBU), the country’s central bank. If the underlying asset is a security or a derivative, the National Securities and Stock Market Commission (NSSMC) will be the main regulator. Crypto market participants will be able to independently determine the value of virtual assets, open bank accounts to settle transactions, and seek judicial protection for associated rights. 
Service providers are required to abide by the country’s anti-money laundering regulations and prevent attempts to finance terrorism using their platforms, just like traditional financial institutions. Current Ukrainian authorities have maintained a positive attitude towards the country’s growing crypto industry, confirmed by representatives of the executive power this week. During a visit to the U.S., President Volodymyr Zelensky highlighted the importance of launching a legal digital assets market which he described as a “development vector” of the nation’s digital economy. Ukraine’s Minister of Digital Transformation, Mykhailo Fedorov, added the country is working to become an attractive jurisdiction for crypto companies. The draft law “On Virtual Assets” was voted on first reading in the Rada last December. After introducing a number of changes, lawmakers presented a revised version of the document in June of this year. Following criticism from various regulators, including NBU and NSSMC, the bill was once again amended with the authors taking into account concerns expressed by other government institutions.