All Posts

I said this before, and it's worth repeating: map is usually not the right tool. But in this case, it can help. You can do something like this:

| makeresults format=csv data="file
lk_file_abc3477.csv
lk_file_xare000csv
lk_file_ppbc34ee.csv"
| map search="| inputlookup $file$
    | stats values(duration_time) AS duration_time by path
    | makemv delim=\"\n \" duration_time
    | eval duration_time=split(duration_time, \" \")
    | stats p90(duration_time) as \"90th percentile (sec)\" by path
    | sort path
    | sendemail to=\"someone@example.com\""

Note that the quotes inside the map search string must be escaped, the token must match the field name created by makeresults (file), and the email is sent with the sendemail command.
You have made a number of errors with your field naming - you are mixing Logs and logs - to Splunk these are different fields. In your first example you do

| eval logs=case(count>0, "1", count=0, "2")
| eval Status=case(Logs=1, "Green", Logs=2, "Red")

where you test Logs in the second statement but set logs in the first. And in your latest post you do | fillnull logs, which creates a lower-case logs field with a value of 0, and you then immediately follow it with a fillnull for Logs. So, take care with field names.
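A minimal sketch of the corrected pattern (the count values are fabricated just to exercise both branches, and numeric values are used throughout so the later comparison matches):

| makeresults count=2
| streamstats count as row
| eval count=if(row=1, 5, 0)
| eval logs=case(count>0, 1, count=0, 2)
| fillnull value=0 logs
| eval Status=case(logs=1, "Green", logs=2, "Red")
| table count logs Status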
Your event is a heading followed by a JSON object, so one approach is to simply create a field extraction to extract the JSON object, and then you have access to all the fields directly. This example shows what that would look like - the rex statement extracts the JSON inline, but you could do that as a calculated field. The spath then parses the JSON:

| makeresults
| eval _raw="StandardizedAddres SUCCEEDED - FROM: {\"StandardizedAddres\":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City\":\"GREEN\",\"County\":null,\"State\":\"WY\",\"ZipCode\":\"44444-9360\",\"Latitude\":null,\"Longitude\":null,\"IsStandardized\":true,\"AddressStatus\":1,\"AddressStandardizationType\":0},\"RESULT\":1,\"AddressDetails\":[{\"AssociatedName\":\"\",\"HouseNumber\":\"123\",\"Predirection\":\"\",\"StreetName\":\"NAANNA SAND RD\",\"Suffix\":\"RD\",\"Postdirection\":\"\",\"SuiteName\":\"\",\"SuiteRange\":\"\",\"City\":\"GREEN\",\"CityAbbreviation\":\"GREEN\",\"State\":\"WY\",\"ZipCode\":\"44444\",\"Zip4\":\"9360\",\"County\":\"Warren\",\"CountyFips\":\"27\",\"CoastalCounty\":0,\"Latitude\":77.0999,\"Longitude\":-99.999,\"Fulladdress1\":\"123 NAANNA SAND RD\",\"Fulladdress2\":\"\",\"HighRiseDefault\":false}],\"WarningMessages\":[\"This mail requires a number or Apartment number.\"],\"ErrorMessages\":[],\"GeoErrorMessages\":[],\"Succeeded\":true,\"ErrorMessage\":null}"
| rex "StandardizedAddres SUCCEEDED - FROM: (?<event>.*)"
| spath input=event
| rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages
| table Latitude Longitude WarningMessages

Note that your AddressDetails is actually a JSON array, so in theory it could contain multiple results; doing this with the JSON extraction will handle any case where you get more than one result in the address array.
Hi @livehybrid
The goal is a single execution of the search/query below for each file, e.g. lk_file_abc3477.csv, lk_file_xare000csv, lk_file_ppbc34ee.csv, etc., sending an email for each of them individually.

| inputlookup lk_file_abc3477.csv
| stats values(duration_time) AS duration_time by path
| makemv delim="\n " duration_time
| eval duration_time=split(duration_time," ")
| stats p90(duration_time) as "90th percentile (sec)" by path
| sort path

Regards
Thank you for the link. Unfortunately I've been using that page with the regional numbers with no luck; I've been trying to contact the US public sector sales team and the regular sales team. I've called several times a day, left messages, tried to make contact via the web, attempted to email, and filled out the form with my information.
Hi @dmcnulty
On the license page of your LM - is it listed as "Enterprise license group" at the moment, not "Free license group"? If it's in the Free license group then you need to switch to Enterprise, at which point it should start using your dev license.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
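If you prefer the CLI, the active group can also be switched there - a sketch, assuming it is run on the license manager with a standard $SPLUNK_HOME (a restart is required afterwards):

$SPLUNK_HOME/bin/splunk edit licenser-groups Enterprise -is_active 1
$SPLUNK_HOME/bin/splunk restart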
Hi @RSS_STT
It is breaking because it is treating the double quote as the end of the string. Is Message=* the last part of your event, or is there more text after the message? If it's always the last part of the event then you could use the following rex command to create a new "fullMessage" field:

| rex field=_raw "Message\=\"(?<fullMessage>.+)\"$"

Here is a runnable example:

| windbag
| head 1
| eval _raw="User=testing Message=\" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server \"RO76 is currently down / unreachable.\""
| rex field=_raw "Message\=\"(?<fullMessage>.+)\"$"
| table _time fullMessage

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @JMPP
What is your search doing? Without seeing it, it's not completely clear, but if you have a scheduled search running to manipulate these csv files then you could have that trigger an email alert action on completion of the search.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @msarkaus
The following should hopefully work for you:

| rex "\"Latitude\"\s*:\s*(?<Latitude>-?\d+\.\d+)"
| rex "\"Longitude\"\s*:\s*(?<Longitude>-?\d+\.\d+)"
| rex "\"WarningMessages\"\s*:\s*\[\s*\"(?<WarningMessages>[^\"]*)"
| table _time Latitude Longitude WarningMessages

Here is a full working example for you to try with:

| windbag
| head 1
| eval _raw="StandardizedAddres SUCCEEDED - FROM: {\"StandardizedAddres\":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City\":\"GREEN\",\"County\":null,\"State\":\"WY\",\"ZipCode\":\"44444-9360\",\"Latitude\":null,\"Longitude\":null,\"IsStandardized\":true,\"AddressStatus\":1,\"AddressStandardizationType\":0},\"RESULT\":1,\"AddressDetails\":[{\"AssociatedName\":\"\",\"HouseNumber\":\"123\",\"Predirection\":\"\",\"StreetName\":\"NAANNA SAND RD\",\"Suffix\":\"RD\",\"Postdirection\":\"\",\"SuiteName\":\"\",\"SuiteRange\":\"\",\"City\":\"GREEN\",\"CityAbbreviation\":\"GREEN\",\"State\":\"WY\",\"ZipCode\":\"44444\",\"Zip4\":\"9360\",\"County\":\"Warren\",\"CountyFips\":\"27\",\"CoastalCounty\":0,\"Latitude\":77.0999,\"Longitude\":-99.999,\"Fulladdress1\":\"123 NAANNA SAND RD\",\"Fulladdress2\":\"\",\"HighRiseDefault\":false}],\"WarningMessages\":[\"This mail requires a number or Apartment number.\"],\"ErrorMessages\":[],\"GeoErrorMessages\":[],\"Succeeded\":true,\"ErrorMessage\":null}"
| rex "\"Latitude\"\s*:\s*(?<Latitude>-?\d+\.\d+)"
| rex "\"Longitude\"\s*:\s*(?<Longitude>-?\d+\.\d+)"
| rex "\"WarningMessages\"\s*:\s*\[\s*\"(?<WarningMessages>[^\"]*)"
| table _time Latitude Longitude WarningMessages

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @dionrivera
Modify the data input configuration within the Splunk Add-on for ServiceNow to apply filters to the CMDB data collection. Instead of querying the entire table, specify criteria to retrieve only the necessary subset of records. If you need to, create multiple inputs, each with its own filtering criteria.

Use ServiceNow's encoded query syntax within the "Filter parameters" field of the CMDB input configuration in the Splunk Add-on. For example, to pull only active Linux servers:

sys_class_name=cmdb_ci_linux_server^operational_status=1

Querying a very large table (10 million+ records) without filters often leads to performance degradation and timeouts in ServiceNow. By applying specific filters in the Splunk add-on's input configuration, you significantly reduce the amount of data ServiceNow needs to process and return, thereby avoiding long-running SQL queries and the associated errors.

Work with your ServiceNow administrator to identify the most efficient filters and ensure appropriate database indexes exist on the ServiceNow side for the fields used in your filter (e.g., sys_class_name, operational_status, sys_updated_on). Test your encoded query directly within ServiceNow's table list view first to validate its correctness and performance before configuring it in the Splunk add-on. Consider incremental fetching by filtering on sys_updated_on to pull only records that have changed since the last poll, rather than repeatedly pulling static data; a sketch follows at the end of this reply.

Splunk Add-on for ServiceNow documentation: https://docs.splunk.com/Documentation/AddOns/latest/ServiceNow/ConfigureInputs
ServiceNow filtering docs: https://www.servicenow.com/docs/bundle/xanadu-platform-user-interface/page/use/common-ui-elements/reference/r_OpAvailableFiltersQueries.html

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
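To make the incremental-fetch suggestion concrete, a sketch of a combined encoded query (the one-day window via gs.daysAgoStart() is an assumption - confirm your instance permits javascript: terms in encoded queries):

sys_class_name=cmdb_ci_linux_server^operational_status=1^sys_updated_on>=javascript:gs.daysAgoStart(1)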
Hi @OscarAlva
How have you been contacting sales? There is a list of regional contacts/contact methods available at https://www.splunk.com/en_us/about-splunk/contact-us.html#sales

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Cheng2Ready
If you apply multiple field extractions then the one with the highest precedence will be used; instead, you may wish to manually modify the regular expression to cover both events.

When extracting the fields using the field extractor wizard, on the "Select fields" step, select the "Show regular expression" link. This then allows you to click the "Edit regular expression" button on the right, which gives you the regex that you can override. At this point you should define a single regex that matches all the relevant events; a sketch follows at the end of this reply.

If you need help creating the regex, please post raw examples/samples of the events and I'd be happy to help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
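For illustration (the two event shapes and the field name are hypothetical, since no raw samples were posted), a single rex with an alternation of prefixes can extract the same field from both log variants:

| makeresults count=2
| streamstats count as row
| eval _raw=if(row=1, "status=OK", "result: FAILED")
| rex field=_raw "(?:status=|result:\s)(?<status>\w+)"
| table _raw status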
When using the Field Extractor, can you use the same name for a field? Will it append or add to the original field created?

Example: I am extracting from the _raw data. I found that some of the _raw data didn't match when I highlighted it using regex match - I was getting the red X, even though it should have been captured since both logs are identical in pattern. So I extracted twice on a single field across two data sets. Will it append, and add it onto the field of data to look for?
I'm attempting to speak with someone in sales. I can't seem to get hold of anyone. Does anyone have any tips to help expedite this?
1) Root cause. It appears that this can happen when enableSched is set to "1" or "true", but the set of actual alerting properties is somehow invalid. For example, if the disabled alert has action.email = 1 but specifies no value for action.email.to, then the green "enable" button will quietly fail for all users, even admins. It posts nothing to the backend and displays no message to the user. A sketch of such a stanza follows at the end of this post.

2) Workaround. You can go to "Edit > Advanced Edit", then scroll down to find "is_scheduled". Change this from "true" to "false" and submit. Now you will be able to "enable" the savedsearch. Then, when you click "edit schedule", you'll be able to re-enable scheduling, and the UI will tell you which required keys aren't populated yet.

(For app developers - there are valid reasons to ship a disabled alert, with a specific cron schedule that is somehow tied to the SPL, for instance. I believe another workaround would be to specify "example@example.com" as the action.email.to key. This may seem strange, but the example.com domain is, according to RFC 2606 and RFC 6761, a reserved domain intended only for documentation and examples.)
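A minimal savedsearches.conf sketch of the failure mode described above (the stanza name and search are placeholders, not from any real app):

[Hypothetical Shipped Alert]
search = index=_internal log_level=ERROR | stats count
disabled = 1
enableSched = 1
cron_schedule = */15 * * * *
action.email = 1
# action.email.to is left unset, so "Edit > Enable" silently fails;
# per the RFC 2606 note above, shipping action.email.to = example@example.com avoids this.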
Posting this in case other folks run into it.

It's possible for an app to ship an alert disabled in such a way that when any user tries to enable it by going to the manager and selecting "Edit > Enable", it doesn't work. Instead of enabling the alert, nothing happens at all. You click the green button and nothing happens. Looking at the browser console, there are no errors when this happens, and the javascript makes no attempt to post anything at all to Splunk.

The question has two parts:
-- what is the root cause of this, and how can folks avoid accidentally shipping app content like this?
-- what workaround might exist for the end users who need to enable the disabled alert?
Hello, I have this Splunk log that contains tons of quotes, commas, and other special characters. I'm trying to pull only Latitude":77.0999 and Longitude":-99.999, and from time to time there will be a WarningMessages: "This mail requires a number or Apartment number" that I would like to capture in a dashboard.

StandardizedAddres SUCCEEDED - FROM: {"Address1":"123 NAANNA SAND RD","Address2":"","City":"GREEN","County":null,"State":"WY","ZipCode":"44444-9360","Latitude":null,"Longitude":null,"IsStandardized":true,"AddressStatus":1,"AddressStandardizationType":0} RESULT: 1 | {"AddressDetails":[{"AssociatedName":"","HouseNumber":"123","Predirection":"","StreetName":" NAANNA SAND RD ","Suffix":"RD","Postdirection":"","SuiteName":"","SuiteRange":"","City":" GREEN","CityAbbreviation":"GREEN","State":"WY","ZipCode":"44444","Zip4":"9360","County":"Warren","CountyFips":"27","CoastalCounty":0,"Latitude":77.0999,"Longitude":-99.999,"Fulladdress1":"123 NAANNA SAND RD ","Fulladdress2":"","HighRiseDefault":false}]," WarningMessages":["This mail requires a number or Apartment number."]:[],"ErrorMessages":[],"GeoErrorMessages":[],"Succeeded":true,"ErrorMessage":null}

I currently use the query below, but I'm not having any luck. This is past my skill set, please help....

index="cf" Environment="NA" msgTxt="API=/api-123BusOwnCommon/notis*"
| eval msgTxt=" API=/api-123BusOwnCommon/notis /WGR97304666665/05-08-2024 CalStatus=Success Controller=InsideApi_ notis Action= notis Duration=3 data*"
| rex "Duration=(?<Duration>\w+)"
| timechart span=1h avg(Duration) AS avg_response by msgTxt

I'd like to show the data like this in Splunk:

Latitude    Longitude    WarningMessages
2.351       42.23        Error in blah
4.10        88.235       Hello world
454.2       50.02        Blah blah blah blah...

Thank you
Hi Splunk Community team,

Please help: I have N lookup files - lk_file_abc3477.csv, lk_file_xare000csv, lk_file_ppbc34ee.csv, etc. I have a Splunk search/script that processes the same data type and the same number of columns in each, and my question is: is there any way to process each file and send an email for each one individually - using the Reports or Alerts options, or any other way - in one single execution?

Regards,
What problem are you trying to solve this way? If you want to adjust the criticality of an alert depending on the asset affected - that's the functionality of Enterprise Security.
Well, it all depends on your utilization really. The rule of thumb is that a single indexer can handle up to 300 GB/day if not running premium apps (ES or ITSI), or 100 GB/day if running ES or ITSI. Actually, a single indexer can index far more daily if it doesn't do any searching. Since you're using ES, there's probably going to be a lot of searching (if for no other reason than keeping datamodel summaries up to date). So one indexer per 200 GB may or may not be too little, depending on your actual load.

You're pushing quite a lot of hardware into the indexers, whereas normally you'd rather have more indexers than bigger ones. More CPUs mean you could add ingestion pipelines, but - especially if reaching for cold data - you might starve your indexers of I/O performance, since you will have the potential for many concurrent searches competing for I/O resources.

It's also not clear to me how this NAS frozen space is supposed to work. Is it a shared space, or do you want a dedicated share for each indexer? Remember that each indexer freezes buckets independently, so unless you script it to keep the storage "tidy" you'll end up with multiple copies of the same frozen bucket.
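To make those rules of thumb concrete (the 600 GB/day figure is hypothetical, purely for illustration): at 600 GB/day with ES you'd be looking at roughly 600 / 100 = 6 indexers, versus 600 / 300 = 2 without premium apps - and that's before allowing for search concurrency, replication, and I/O headroom.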