All Posts


Hi, Your response was the key to making the idea I had happen. I had to make some changes to the query. Since I have a long file list, I decided to list the files with the "| rest" command; I then ran into wildcard issues and had to create macros to overcome that problem. Now I am working with the Splunk admin team because I am getting the error below from "| sendemail", which is caused by missing admin access: [map]: command="sendemail", 'rootCAPath' while sending mail to: jpichardo@jaggaer.com I cannot use the "| sendresults" command because the version we have does not support it.
| rest /servicesNS/-/-/data/lookup-table-files f=title splunk_server=local ```To avoid the "you do not have the "dispatch_rest_to_indexers" capability" warning```
| fields title
| search title="lk_file*.csv"
| dedup title
| map maxsearches=9999 search="inputlookup $title$
    | eval filename=$title$
    | search path!=`macroDoubleQuotation`
    | stats values(duration_time) AS duration_time by path filename
    | `macroMakemvNewLineDelimeter` duration_time
    | eval duration_time=`macroSplitSpace`
    | `macroPerformanceP90`
    | sort path
    | `macroSendMailPerformanceSlaList`"
Thanks so much for helping me with this! Regards,
Ugh. Unfortunately you have your data in the highly inconvenient form of fieldname=something,fieldvalue=something, from which you should deduce something=something. Yes, you can do spath, and foreach might actually prove to be better than mvexpand, but it won't be pretty. The main problem with this data format is that in order to do anything reasonable with it (including initial filtering) you have to process it and transform it into something completely different. If your dataset isn't that big and you're not going to filter the events anyway, you can get by with it. But if you wanted to select just one user... you'd still need to dig through all your events. That's not a very efficient way to do so. So while I usually say that, as a rule of thumb, you should not fiddle with raw regexes over structured data, in this case, if you are absolutely sure that the format is always like this ( {a:b,c:d} -> b=d ), you could hazard a regex-based extraction as long as you're aware of the risks. Alternatively, you could use summary indexing to transform each event just once into the desired format with properly rendered key=value pairs and then search from the summary index. A rough sketch of that transformation follows.
I'm trying to replace the default SSL certs on the deployment server with third-party certs but I'm confused about what it entails. I don't know much about TLS certs, mostly just the basics. I'm following these documents for the deployment server-to-client: https://docs.splunk.com/Documentation/Splunk/latest/Security/StepstoSecuringSplunkwithTLS https://docs.splunk.com/Documentation/Splunk/latest/Security/ConfigTLScertsS2S.  If I make the changes on the deployment server to point to the third-party cert, do I also need to change the cert on the UFs to keep communicating on port 8089?  
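For reference, this is roughly the stanza I expect to end up editing on the deployment server, based on the docs above (the paths and file names are placeholders, not my real config):
# server.conf on the deployment server - placeholder paths
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myDeploymentServerCert.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACertChain.pem
sslPassword = <private key password>
My question is whether the UFs that talk to it on 8089 need matching changes on their side, or whether they keep working with the defaults.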
I'm continuing to work on dashboards to report on user activity on our application. I've been going through the knowledgebase, bootcamp slides, and Google, trying to determine the best route to report on the values in log files such as this one. The dashboards I am creating show activity in the various modules: what values are getting selected and what is being pulled up. I looked at spath and mvexpand and wasn't getting the results I was hoping for; it might have been that I wasn't formatting the search correctly, and also how green my workplace and I are with Splunk. Creating field extractions has worked for the most part to pull the specific values I wanted to report, but further on I'm finding incorrect values being pulled in. Below is one such event that's been sanitized; it's in valid JSON format. I'm trying to build a table showing the userName, date and time, serverHost, SparklingTypeId, PageSize, and PageNumber. The other values not so much. Are spath and mvexpand along with eval statements the best course? I was using field extractions in a couple of other modules but then found incorrect values were being added.
{"auditResultSets":null,"schema":"com","storedProcedureName":"SpongeGetBySearchCriteria","commandText":"com.SpongeGetBySearchCriteria","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@SpongeTypeId","value":null},{"name":"@CustomerNameStartWith","value":null},{"name":"@IsAssigned","value":null},{"name":"@IsAssignedToIdIsNULL","value":false},{"name":"@SpongeStatusIdsCSV","value":",1,"},{"name":"@RequestingValueId","value":null},{"name":"@RequestingStaffId","value":null},{"name":"@IsParamOther","value":false},{"name":"@AssignedToId","value":null},{"name":"@MALLLocationId","value":8279},{"name":"@AssignedDateFrom","value":null},{"name":"@AssignedDateTo","value":null},{"name":"@RequestDateFrom","value":null},{"name":"@RequestDateTo","value":null},{"name":"@DueDateFrom","value":null},{"name":"@DueDateTo","value":null},{"name":"@ExcludeCustomerFlagTypeIdsCSV","value":",1,"},{"name":"@PageSize","value":25},{"name":"@PageNumber","value":1},{"name":"@SortColumnName","value":"RequestDate"},{"name":"@SortDirection","value":"DESC"},{"name":"@HasAnySparkling","value":null},{"name":"@SparklingTypeId","value":null},{"name":"@SparklingSubTypeId","value":null},{"name":"@SparklingStatusId","value":null},{"name":"@SparklingDateFrom","value":null},{"name":"@SparklingDateTo","value":null},{"name":"@SupervisorId","value":null},{"name":"@Debug","value":null}],"serverIPAddress":"255.255.000.000","serverHost":"WEBSERVER","clientIPAddress":"255.255.255.255","sourceSystem":"WebSite","module":"Vendor.Product.BLL.Community","accessDate":"2025-04-30T15:34:33.3568918-06:00","userId":3231,"userName":"PeterVenkman","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.Community.Operations.SpongeSearch","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.Community.SpongeManager","method":"SpongeSearch"}]}
limit=N is the same as limit=topN. And bottomN appeared in 8.1, which was several years ago.
this was perfect, thank you!
Scratch that first line. Use this: index="ifi" appEnvrnNam="ANY" msgTxt="Standardizess SUCCEEDED - FROM:*"
  index="ifi" appEnvrnNam="ANY" msgTxt="StandardizedAddress SUCCEEDED*" | eval _raw="Standardizedss SUCCEEDED - FROM: {\"Standardizedss \":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\... See more...
  index="ifi" appEnvrnNam="ANY" msgTxt="StandardizedAddress SUCCEEDED*" | eval _raw="Standardizedss SUCCEEDED - FROM: {\"Standardizedss \":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City\":\"GREEN\",\"County\":null,\"State\":\"WY\",\"ZipCode\":\"44444-9360\",\"Latitude\":null,\"Longitude\":null,\"IsStandardized\":true,\"AddressStandardization\":1,\"AddressStandardizationType\":0},\"RESULT\":1,\"AddressDetails\":[{\"AssociatedName\":\"\",\"HouseNumber\":\"123\",\"Predirection\":\"\",\"StreetName\":\"NAANNA SAND RD\",\"Suffix\":\"RD\",\"Postdirection\":\"\",\"SuiteName\":\"\",\"SuiteRange\":\"\",\"City\":\"GREEN\",\"CityAbbreviation\":\"GREEN\",\"State\":\"WY\",\"ZipCode\":\"44444\",\"Zip4\":\"9360\",\"County\":\"Warren\",\"CountyFips\":\"27\",\"CoastalCounty\":0,\"Latitude\":77.0999,\"Longitude\":-99.999,\"Fulladdress1\":\"123 NAANNA SAND RD\",\"Fulladdress2\":\"\",\"HighRiseDefault\":false}],\"WarningMessages\":[\"This mail requires a number or Apartment number.\"],\"ErrorMessages\":[],\"GeoErrorMessages\":[],\"Succeeded\":true,\"ErrorMessage\":null}" | rex "StandardizedAddres SUCCEEDED - FROM: (?<event>.*)" | spath input=event | rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages | table Latitude Longitude WarningMessages
Hi Matt, Thanks for the advice.  I figured the solution out, see my reply. Are you aware of a way to pass this onto the relevant team? Thanks, Stanley
I have figured out the solution to the issue. The problem lies in the Splunk Security Essentials saved search `Generate MITRE Data Source Lookup`, specifically the line `| sort 0 ds_id`.
Solution: update the line
``` | sort 0 ds_id ```
to
``` | sort 0 ds_id external_id ```
Hopefully this fixed search can be implemented in future app releases.
I know this is a pretty old post, but wanted to put this here for anyone else looking. This has bothered me for some time. It seems timechart, as of some version, supports 3 limit options:
limit=N
limit=topN
limit=bottomN
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Timechart
First things first.
1. You posted this in the Splunk SOAR section of Answers but the question seems to be about Splunk Enterprise. I'll move this thread to the appropriate section, but please try to be careful about where you post - the sections are there so that we keep the forums tidy and make it easier to find answers to your problems.
2. We have no idea if you have a standalone installation or clustered indexers. A cluster involves way more work to do what you're asking about and needs special care not to break it.
3. While moving indexes around is possible after their initial creation, it's a risky operation if not done properly, and therefore I'd advise against an inexperienced admin attempting it. You have been warned.
One more very important thing - what do you mean by "external storage"? If you plan on moving some of your indexes onto a CIFS or NFS share (depending on the system your Splunk runs on), forget it. This type of storage can be used for storing frozen buckets but not for searchable data. And now the second warning. You have been warned twice.
Since Splunk's indexes are "just" directories on a disk, there are two approaches to the task of moving the data around.
Option one - stop your Splunk, move the hot/warm and/or cold directories to another location, adjust the index definition in indexes.conf accordingly, and start Splunk. (A minimal sketch follows this post.)
Option two - stop your Splunk, move the hot/warm and/or cold directories to another location and make the OS see the new location under the old one (using a bind mount, symlink, or junction - depending on the underlying OS), then start Splunk.
In the case of clustered indexers you must go with the second option, at least until you've moved the data on all indexers, because all indexers must share the same config, so you can't reconfigure just some of them. Option two is also the only way to go if you wanted to move just part of your buckets (like the oldest half of your cold buckets), but that is something even I wouldn't try to do in production. You have been warned thrice!
Having said that - it would probably be way easier to attach the external storage as frozen storage and make Splunk rotate the older buckets there. Of course frozen buckets are not searchable, so from Splunk's point of view they are effectively deleted. (And handling frozen buckets in a cluster can be tricky if you want to avoid duplicated frozen buckets.)
Another option (but that completely messes with your overall architecture) would be to use remote S3-compatible storage and define SmartStore-backed indexes, but that is a huge overhaul of your whole setup, and while in some cases it can help, in others it can cause additional problems, so YMMV.
Both @livehybrid and @richgalloway 's solutions are OK, but the question is what problem you are actually trying to solve. It's relatively unlikely that you have - let's say - 8k or 9k character long events which are perfectly "ok" and suddenly, when an event hits the 10k limit, it becomes "worthless" to you so you drop it. It doesn't make much sense, since a hard threshold on data size doesn't seem to be a reasonable way of differentiating between types of data. I'd be hard pressed to find a scenario where this actually makes sense instead of checking the data syntactically. BTW, Splunk operates on characters, not bytes, so while TRUNCATE indeed cuts to "about" the given size in bytes, the len() function returns the number of code points (not even characters! It might differ in some scripts using composite characters), not bytes.
Ahh... right. If you change the license type, that might indeed cause "strange" behaviour since different license types normally don't stack and may enable different features. Hence the restart.
Hi @Piyush_Sharma37
Increase the maximum upload size limit in your Splunk Enterprise configuration:
1. Navigate to $SPLUNK_HOME/etc/system/local/ on your Splunk server.
2. Create or edit the web.conf file.
3. Add or modify the [settings] stanza to include max_upload_size:
[settings]
max_upload_size = 2048
Set it to a value in MB larger than the app file, e.g. 2048 for 2GB.
4. Save the web.conf file.
5. Restart Splunk Enterprise for the change to take effect.
6. Attempt the installation of the "Python for Scientific Computing" app again through the UI.
Splunk has a default limit on the size of apps that can be uploaded via the web interface. The "Python for Scientific Computing" app package is often larger than this default limit, causing the "file size is too big" error. Increasing the max_upload_size parameter in web.conf allows Splunk to accept larger app files during installation. Ensure you have sufficient disk space on the Splunk server where the app will be installed and unpacked. Restarting Splunk is mandatory for the configuration change to be applied. See the web.conf documentation.
You can also install it from the command line using:
./splunk install app <path/packagename>
Depending on your architecture and configuration it may be that you need to install this via your Splunk Deployment Server rather than a manual install. Please review the installation docs for more information: https://docs.splunk.com/Documentation/MLApp/5.5.0/User/Installandconfigure
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Snorre
The license files are XML files inside, so if you have a look at the contents of the files in the license directory you might be able to clarify which one you applied, if unsure. They each have a unique signature (amongst other things) inside the file. Any text editor should work for viewing them.
Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
@Piyush_Sharma37
Splunk has a default maximum upload size of 500MB for files uploaded via the web interface. You can increase this limit by editing the web.conf file:
1. Navigate to the web.conf file in your Splunk installation directory (usually found in C:\Program Files\Splunk\etc\system\local).
2. Add or modify the following line under the [settings] stanza:
[settings]
max_upload_size = 1000
3. Save the file and restart Splunk.
web.conf - Splunk Documentation
Manual install: if increasing the upload limit doesn't work or you prefer a direct approach, manually install the PSC add-on.
1. Download the PSC add-on (.tar.gz file) from Splunkbase.
2. Extract the .tar.gz file to $SPLUNK_HOME/etc/apps/ (e.g., C:\Program Files\Splunk\etc\apps\). Ensure the extracted folder is named appropriately.
3. Restart Splunk.
4. Verify the installation in Splunk Web under Apps > Manage Apps; PSC should appear in the list.
Hello everyone, I'm Piyush and I'm new to the Splunk environment. I was working with MLTK and Python for Scientific Computing to develop something for the ongoing Splunk hackathon, but although I have tried several times to install it, it still shows me an XML screen saying the file size is too big. I even deleted and re-downloaded the Python for Scientific Computing file and uploaded it again, yet the issue persists, while other add-ons like MLTK installed just fine. I'm on Windows and I don't have a clue how to move forward from here, as I am learning about the Splunk environment on the go.
Thanks for getting back to me. I started to look into the /etc/licenses folder and toyed around with the files there, and now I think I have figured out what is happening: if I install a Dev key in the Prod environment, Splunk deletes all Prod keys in the folder and creates a "Restart required" message in the dashboard. After the restart, only the keys installed after the dev key are loaded. I might very well have installed a new dev key in the prod environment, as I received renewal keys for both prod and dev in the same email. We will ask the maintenance team for a restore of the files in the licenses folder and it will probably be sorted. It would be great if Splunk could show a warning when I try to do something stupid like uploading a dev license in the prod environment, or maybe even keep a backup of the license files when deleting them, but I have learned my lesson now and won't be doing that again.
Normally the licenses shouldn't "disappear" on their own. Even when licenses expire, they still show as expired. The licenses are backed by files in $SPLUNK_HOME/etc/licenses so if they "disappeared"... See more...
Normally the licenses shouldn't "disappear" on their own. Even when licenses expire, they still show as expired. The licenses are backed by files in $SPLUNK_HOME/etc/licenses so if they "disappeared" someone must have deleted them. Check your backups for contents of this directory.