All Posts

I'm continuing to work on dashboards to report on user activity in our application. I've been going through the knowledge base, bootcamp slides, and Google, trying to determine the best route to report on the values in log files such as this one. The dashboards I am creating show activity in the various modules: which values are getting selected and what is being pulled up. I looked at spath and mvexpand and wasn't getting the results I was hoping for; it might have been that I wasn't formatting the search correctly, and also how green my workplace and I are to Splunk. Creating field extractions has worked for the most part to pull the specific values I wanted to report on, but further on I'm finding incorrect values being pulled in. Below is one such event that's been sanitized; it's in valid JSON format. I'm trying to produce a table showing the userName, date and time, serverHost, SparklingTypeId, PageSize, and PageNumber; the other values not so much. Are spath and mvexpand, along with eval statements, the best course? I was using field extractions in a couple of other modules but then found incorrect values were being added.

{"auditResultSets":null,"schema":"com","storedProcedureName":"SpongeGetBySearchCriteria","commandText":"com.SpongeGetBySearchCriteria","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@SpongeTypeId","value":null},{"name":"@CustomerNameStartWith","value":null},{"name":"@IsAssigned","value":null},{"name":"@IsAssignedToIdIsNULL","value":false},{"name":"@SpongeStatusIdsCSV","value":",1,"},{"name":"@RequestingValueId","value":null},{"name":"@RequestingStaffId","value":null},{"name":"@IsParamOther","value":false},{"name":"@AssignedToId","value":null},{"name":"@MALLLocationId","value":8279},{"name":"@AssignedDateFrom","value":null},{"name":"@AssignedDateTo","value":null},{"name":"@RequestDateFrom","value":null},{"name":"@RequestDateTo","value":null},{"name":"@DueDateFrom","value":null},{"name":"@DueDateTo","value":null},{"name":"@ExcludeCustomerFlagTypeIdsCSV","value":",1,"},{"name":"@PageSize","value":25},{"name":"@PageNumber","value":1},{"name":"@SortColumnName","value":"RequestDate"},{"name":"@SortDirection","value":"DESC"},{"name":"@HasAnySparkling","value":null},{"name":"@SparklingTypeId","value":null},{"name":"@SparklingSubTypeId","value":null},{"name":"@SparklingStatusId","value":null},{"name":"@SparklingDateFrom","value":null},{"name":"@SparklingDateTo","value":null},{"name":"@SupervisorId","value":null},{"name":"@Debug","value":null}],"serverIPAddress":"255.255.000.000","serverHost":"WEBSERVER","clientIPAddress":"255.255.255.255","sourceSystem":"WebSite","module":"Vendor.Product.BLL.Community","accessDate":"2025-04-30T15:34:33.3568918-06:00","userId":3231,"userName":"PeterVenkman","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.Community.Operations.SpongeSearch","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.Community.SpongeManager","method":"SpongeSearch"}]}
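For reference, here is a minimal sketch of one way to pull those values out with spath and mvexpand. The index and sourcetype names are placeholders for whatever your data actually uses; the field names come from the sample event above.

```
index=your_index sourcetype=your_json_sourcetype
| spath
| spath output=param path="parameters{}"
| mvexpand param
| eval p_name=ltrim(spath(param, "name"), "@"), p_value=spath(param, "value")
| where p_name IN ("SparklingTypeId", "PageSize", "PageNumber")
| eval {p_name}=p_value
| stats values(SparklingTypeId) as SparklingTypeId values(PageSize) as PageSize values(PageNumber) as PageNumber by _time userName serverHost accessDate
| table userName accessDate serverHost SparklingTypeId PageSize PageNumber
```

The first spath extracts the scalar fields (userName, serverHost, accessDate), the second pulls each element of the parameters array into a multivalue field that mvexpand splits into one row per parameter; the stats at the end collapses those rows back into one row per event, which works because _time and the scalar fields are carried through mvexpand unchanged.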
limit=N is the same as limit=topN. And bottomN appeared in 8.1, which was several years ago.
this was perfect, thank you!
Scratch that first line. Use this: index="ifi" appEnvrnNam="ANY" msgTxt="Standardizess SUCCEEDED - FROM:*"
  index="ifi" appEnvrnNam="ANY" msgTxt="StandardizedAddress SUCCEEDED*" | eval _raw="Standardizedss SUCCEEDED - FROM: {\"Standardizedss \":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\... See more...
  index="ifi" appEnvrnNam="ANY" msgTxt="StandardizedAddress SUCCEEDED*" | eval _raw="Standardizedss SUCCEEDED - FROM: {\"Standardizedss \":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City\":\"GREEN\",\"County\":null,\"State\":\"WY\",\"ZipCode\":\"44444-9360\",\"Latitude\":null,\"Longitude\":null,\"IsStandardized\":true,\"AddressStandardization\":1,\"AddressStandardizationType\":0},\"RESULT\":1,\"AddressDetails\":[{\"AssociatedName\":\"\",\"HouseNumber\":\"123\",\"Predirection\":\"\",\"StreetName\":\"NAANNA SAND RD\",\"Suffix\":\"RD\",\"Postdirection\":\"\",\"SuiteName\":\"\",\"SuiteRange\":\"\",\"City\":\"GREEN\",\"CityAbbreviation\":\"GREEN\",\"State\":\"WY\",\"ZipCode\":\"44444\",\"Zip4\":\"9360\",\"County\":\"Warren\",\"CountyFips\":\"27\",\"CoastalCounty\":0,\"Latitude\":77.0999,\"Longitude\":-99.999,\"Fulladdress1\":\"123 NAANNA SAND RD\",\"Fulladdress2\":\"\",\"HighRiseDefault\":false}],\"WarningMessages\":[\"This mail requires a number or Apartment number.\"],\"ErrorMessages\":[],\"GeoErrorMessages\":[],\"Succeeded\":true,\"ErrorMessage\":null}" | rex "StandardizedAddres SUCCEEDED - FROM: (?<event>.*)" | spath input=event | rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages | table Latitude Longitude WarningMessages
Hi Matt, Thanks for the advice.  I figured the solution out, see my reply. Are you aware of a way to pass this onto the relevant team? Thanks, Stanley
I have figured out the solution to the issue. The problem lies in the Splunk Security Essentials saved search `Generate MITRE Data Source Lookup`, specifically the line `| sort 0 ds_id`.

Solution: update the line

```
| sort 0 ds_id
```

to

```
| sort 0 ds_id external_id
```

Hopefully this fix can be included in future app releases.
I know this is a pretty old post, but wanted to put this here for anyone else looking. This has bothered me for some time. It seems timechart, as of some version, supports 3 limit options:

limit=N
limit=topN
limit=bottomN

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Timechart
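To illustrate, here is a quick sketch against internal logs; the index and split-by field are only an example, and the linked docs describe the top/bottom variants.

```
index=_internal sourcetype=splunkd
| timechart span=1h count by component limit=bottom5 useother=false
```

With limit=5 or limit=top5 the same search would instead keep the five highest-scoring series.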
First things first.

1. You posted this in the Splunk SOAR section of Answers but the question seems to be about Splunk Enterprise. I'll move this thread to the appropriate section, but please try to be careful about where you post - the sections are there so that we keep the forums tidy and make it easier to find answers to your problems.

2. We have no idea if you have a standalone installation or clustered indexers. A cluster involves way more work to do what you're asking about and needs special care not to break it.

3. While moving indexes around is possible after their initial creation, it's a risky operation if not done properly, and therefore I'd advise an inexperienced admin against attempting it. You have been warned.

One more very important thing - what do you mean by "external storage"? If you plan on moving some of your indexes onto some CIFS or NFS share (depending on the system your Splunk runs on), forget it. This type of storage can be used for storing frozen buckets but not for searchable data. And now the second warning. You have been warned twice.

Since Splunk's indexes are "just" directories on a disk, there are two approaches to the task of moving the data around.

Option one - stop your Splunk, move the hot/warm and/or cold directories to another directory, adjust the index definition in indexes.conf accordingly (see the sketch after this post), and start Splunk.

Option two - stop your Splunk, move the hot/warm and/or cold directories to another directory, make the OS see the new location under the old location (using a bind mount, symlink, or junction - depending on the underlying OS), and start Splunk.

In the case of clustered indexers you must go with the second option, at least until you've moved the data on all indexers, because all indexers must share the same config, so you can't reconfigure just some of them. Option two is also the only way to go if you wanted to move just part of your buckets (like the oldest half of your cold buckets), but this is something even I wouldn't try to do in production. You have been warned thrice!

Having said that - it would probably be way easier to attach the external storage as frozen storage and make Splunk rotate the older buckets there. Of course frozen buckets are not searchable, so from Splunk's point of view they are effectively deleted. (And handling frozen buckets in a cluster can be tricky if you want to avoid duplicated frozen buckets.)

Another option (but that completely messes with your overall architecture) would be to use a remote S3-compatible storage and define SmartStore-backed indexes, but that is a huge overhaul of your whole setup, and while in some cases it can help, in others it can cause additional problems, so YMMV.
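For option one, a minimal indexes.conf sketch could look like the following - the index name and target paths are purely hypothetical, so substitute your own:

```
[my_index]
homePath   = /new/storage/my_index/db
coldPath   = /new/storage/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Optional: roll buckets that age out to external storage instead of deleting them
coldToFrozenDir = /mnt/external_storage/frozen/my_index
```

Again, don't point homePath or coldPath at a CIFS/NFS share; only the frozen location can live on that kind of storage.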
Both @livehybrid and @richgalloway 's solutions are OK, but the question is what problem you are actually trying to solve. It's relatively unlikely that you have - let's say - 8k or 9k character long events which are perfectly "ok", and suddenly, when an event hits the 10k limit, it is "worthless" for you so you're dropping it. It doesn't make much sense, since a hard threshold on data size doesn't seem to be a reasonable way of differentiating between different types of data. I'd be hard pressed to find a scenario where this actually makes sense instead of checking the data syntactically.

BTW, Splunk operates on characters, not bytes, so while TRUNCATE indeed cuts to "about" the given size in bytes, the len() function returns the number of code points (not even characters! It might differ in some scripts using composite characters) instead of bytes.
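If you do end up filtering on length, a minimal sketch might look like this - the 10000 threshold and index name are just placeholders, and keep in mind that len() counts code points, not the bytes that TRUNCATE works with:

```
index=your_index
| eval event_length=len(_raw)
| where event_length >= 10000
| stats count by sourcetype
```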
Ahh... right. If you change the license type, that might indeed cause "strange" behaviour since different license types normally don't stack and may enable different features. Hence the restart.
Hi @Piyush_Sharma37 

Increase the maximum upload size limit in your Splunk Enterprise configuration:

1. Navigate to $SPLUNK_HOME/etc/system/local/ on your Splunk server.
2. Create or edit the web.conf file.
3. Add or modify the [settings] stanza to include max_upload_size:

[settings]
max_upload_size = 2048

Set it to a value in MB larger than the app file size, e.g., 2048 for 2GB.

4. Save the web.conf file.
5. Restart Splunk Enterprise for the changes to take effect.
6. Attempt the installation of the "Python for Scientific Computing" app again through the UI.

Splunk has a default limit on the size of apps that can be uploaded via the web interface. The "Python for Scientific Computing" app package is often larger than this default limit, causing the "file size is too big" error. Increasing the max_upload_size parameter in web.conf allows Splunk to accept larger app files during installation. Ensure you have sufficient disk space on the Splunk server where the app will be installed and unpacked. Restarting Splunk is mandatory for the configuration change to be applied. See the web.conf documentation.

You can also install it from the command line using:

./splunk install app <path/packagename>

Depending on your architecture and configuration it may be that you need to install this via your Splunk Deployment Server rather than a manual install. Please review the installation docs for more information: https://docs.splunk.com/Documentation/MLApp/5.5.0/User/Installandconfigure

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Snorre 

The license files are XML files inside, so if you have a look at the contents of the files in the license directory you might be able to clarify which one you applied, if unsure. They each have a unique signature (amongst other things) inside the file. Any text editor should work for viewing them.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
@Piyush_Sharma37 

Splunk has a default maximum upload size of 500MB for files uploaded via the web interface. You can increase this limit by editing the web.conf file.

1. Navigate to the web.conf file in your Splunk installation directory (usually found in C:\Program Files\Splunk\etc\system\local).
2. Add or modify the following line under the [settings] stanza:

[settings]
max_upload_size = 1000

3. Save the file and restart Splunk.

web.conf - Splunk Documentation

Manual Install: If increasing the upload limit doesn’t work or you prefer a direct approach, manually install the PSC add-on.

1. Download the PSC add-on (.tar.gz file) from Splunkbase.
2. Extract the .tar.gz file to $SPLUNK_HOME/etc/apps/ (e.g., C:\Program Files\Splunk\etc\apps\). Ensure the extracted folder is named appropriately.
3. Restart Splunk.
4. Verify the installation in Splunk Web under Apps > Manage Apps; PSC should appear in the list.
Hello everyone, I'm Piyush and I'm new to the Splunk environment. I was working with MLTK and Python for Scientific Computing to develop something for the ongoing Splunk hackathon, but although I have tried several times to install it, it still shows me an XML screen saying the file size is too big. I even deleted and re-downloaded the Python file and uploaded it again. However, the issue still persists, while other add-ons like MLTK got installed just fine. I'm on Windows and I don't have a clue how to move forward from here, as I am learning about the Splunk environment on the go.
Thanks for getting back to me. I started to look into the /etc/licenses folder and toyed around with the files there, and now I think I've figured out what is happening: if I install a Dev key in the Prod environment, Splunk deletes all Prod keys in the folder and creates a "Restart required" message in the dashboard. After the restart, only the files installed after the dev key are loaded.

I might very well have installed a new dev key in the prod environment, as I received renewal keys for both prod and dev in the same email. We will ask the maintenance team for a restore of the files in the licenses folder and it will probably be sorted.

It would be great if Splunk could show a warning when I try to do such stupid things as uploading a dev license in the prod environment, or maybe even keep a backup of the license files when deleting them, but I have learned now and won't be doing that again.
Normally the licenses shouldn't "disappear" on their own. Even when licenses expire, they still show as expired. The licenses are backed by files in $SPLUNK_HOME/etc/licenses, so if they "disappeared", someone must have deleted them. Check your backups for the contents of this directory.
Installing an app on the SH tier doesn't directly install the same app on the indexers. Parts of apps are pushed to the indexers as a knowledge bundle.

Anyway, back to the original question (which is a bit dated) - if you have an automatic lookup defined, you must have a lookup to back it (see the sketch below). All "solutions" in this thread do not limit the scope of the lookup to a single SH but rather distribute the lookup across the whole environment.
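For context, an automatic lookup is just a props.conf binding to a lookup definition. A minimal sketch - the sourcetype, lookup name, and field names here are all hypothetical - looks like this:

```
# props.conf
[my_sourcetype]
LOOKUP-enrich_user = my_lookup_definition user AS user OUTPUT department

# transforms.conf
[my_lookup_definition]
filename = users.csv
```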
I added a 30GB renewal license key that is valid from June 14th later this year. Afterward I got a message telling me to restart Splunk. I did that, and now all the other licenses are missing from the license admin console. Has anyone experienced this before? Is there a way to recover the old licenses? Running Splunk Enterprise 9.2.0.1 on-prem on Red Hat.
While both approaches (foreach and transpose) should get you what you want, they might not have very good performance. Since "we're using first row as column names", I'm wondering if it wouldn't be easier if you didn't pull the data directly into Splunk but rather wrote it to a CSV file and ingested that file with indexed extractions (yes, that's often not the best way either, but in this case it might be better). A rough sketch of such a sourcetype follows below.
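As a sketch of the indexed-extractions route - the sourcetype name and timestamp field are purely hypothetical - the sourcetype on the forwarder reading the CSV could look like:

```
# props.conf on the forwarder that monitors the CSV file
[my_csv_export]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
```

With that in place the header row becomes the field names at index time, so there's no need for foreach or transpose at search time.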