This package connects much of the TF2 ecosystem to explore basic statistics and open it up to the statistical models available in R. The main driving force behind this project is the abundance of stats available from the automatically generated logs of each match. However, querying this information is easiest right after a match or on a case-by-case basis. For insight into a whole period of time, into certain play styles, or into a certain level of competition, the process is tedious and error-prone.
A new version of “matconv” has been released and can be found here. In this update of my Matlab-to-R converter, I added function converters that take Matlab function calls and turn them into R calls. These can be customized for your own project, and a base dictionary, largely converted from this reference text, can be found at ‘inst/extData/HiebelerDict.txt’. The following is the vignette for function calls; it uses some of the dictionary lines in that file to illustrate the different syntax and flags that can be used.
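As a rough sketch of the workflow, here is how I'd expect a dictionary-driven conversion to look. The function names (`makeFuncMaps()` to read a dictionary, `mat2r()` to drive the conversion) and the returned `rCode` field are assumptions about the package's exported API; check the vignette for the exact signatures.

```r
# Sketch only: assumes matconv exports makeFuncMaps() and mat2r()
library(matconv)

# Build function converters from the bundled dictionary
dictPath <- system.file("extData", "HiebelerDict.txt", package = "matconv")
funcMaps <- makeFuncMaps(dictPath)

# Feed a line of Matlab through the converter
out <- mat2r("v = zeros(4, 5);", funcConverters = funcMaps)
out$rCode  # assumed field holding the translated R code
```

The dictionary file itself is plain text, so adding a project-specific mapping should just be a matter of appending a line in the same format as the existing entries.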
We have loads of observational data on monarch butterflies’ flights. We have many studies experimenting on what governs the direction, speed, or length of a butterfly’s flight. What is missing is a predictive model tying these parameters to the observational data. This gap is magnified for lesser-known pests that directly impact crop yields. The R package biosplit tries to fill this gap using a combined biological and physical model. Simple parameters are fed into the program, and a year’s worth of migration is simulated using NOAA’s passive dispersal model, HYSPLIT. Some of the output that can be generated can be seen in this recently published paper.
I have released a Matlab/Octave code translator on my GitHub page, here. Engineering seems to be an untapped market for R. Experiments across disciplines require similar data transformations, but Matlab often pushes groups away from sharing code. One reason many groups choose to forgo a switch in programming languages is a large existing code base that would need to be converted, which is tedious at best and fatally time-consuming at worst.
Using the raster package to query geospatial data sets is easy. You can use coordinates, a station list, an extent object, the grid row or column, or logical subsetting. But what if you want a more complex object to query the data? What if you wanted a non-standard state region: the area between two lakes and a mountain, the area between two amusement parks, or the region in a flood plain between two evacuation centers? Initially, you might think the answer is an external data set. That could involve changing the projection, changing the grid size, masking some areas, or other data manipulations. At the very least, you are tying your analysis to an outside data source that could change over time and has a reputation of its own.
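A minimal sketch of the built-in query styles next to a hand-built region, using a toy 10 x 10 grid (the coordinates and polygon here are made up for illustration; `raster()`, `extract()`, `extent()`, and `spPolygons()` are all standard raster-package functions):

```r
library(raster)

# Toy grid: 10 x 10 cells over a 10 x 10 unit window, values 1..100
r <- raster(nrows = 10, ncols = 10, xmn = 0, xmx = 10, ymn = 0, ymx = 10)
values(r) <- 1:ncell(r)

# Query by coordinates (a one-row matrix of x, y)
extract(r, cbind(2.5, 7.5))        # 23: row 3 from the top, column 3

# Query by an extent object (all cells in a bounding box)
extract(r, extent(0, 5, 5, 10))

# Query by an arbitrary region you define yourself, no external
# shapefile needed: a polygon built directly from vertex coordinates
p <- spPolygons(rbind(c(1, 1), c(1, 4), c(4, 4), c(4, 1)))
extract(r, p)                      # values of cells whose centers fall inside
```

Building the polygon in code keeps the region definition inside the analysis script, which is exactly the alternative to depending on an outside data source.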
vapply is the often-overlooked data analysis tool in the apply family. lapply gives a developer the advantage of a list output, while sapply seemingly handles most of the other cases. The rest are presented as having very specific use cases that one should only learn when no other tool will do the job. vapply is often introduced after these functions and is shown to require a much more rigorous implementation. Many don’t want to force the call to have a uniform output with a definite size. Those interested in speed enhancements often overlook this step and go straight for vectorizing the process.
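That "rigorous implementation" is just one extra argument: `FUN.VALUE`, a template declaring the type and length of each result. A small example of the contract it enforces:

```r
x <- list(a = 1:3, b = 4:6, c = 7:9)

# FUN.VALUE = numeric(1): every result must be a single double
means <- vapply(x, mean, FUN.VALUE = numeric(1))
means
#> a b c
#> 2 5 8

# A template of length 2 gives a 2 x 3 matrix (min and max per element)
vapply(x, range, FUN.VALUE = numeric(2))

# Where sapply would quietly fall back to a list, vapply stops:
# vapply(x, function(v) v[v > 4], numeric(1))  # error: values must be length 1
```

The payoff is that the output's type and shape are guaranteed at the call site, so downstream code never has to guess whether it got a vector, matrix, or list back.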
This blog is going to have updates on my various projects for R and other platforms. The main project is developing my master’s thesis into an actual piece of software. For my other projects that aren’t open source or are work related, I will post some of my thought processes and ramblings in the form of tutorials and package highlights. Hopefully, this will include extending some of those packages or making new ones if I see a gap in the tools.