All Posts



When using the Field Extractor, can you use the same name for a field? Will it append to the original field's extraction? For example: I am extracting from the _raw data, and I found that some of the _raw data didn't match when I highlighted it using regex match. I was getting the red X as in the example below, even though it should have captured it, since both logs are identical in pattern. So I extracted twice into a single field, on two data sets. Will it append, adding more patterns for the field to look for?
I'm attempting to speak with someone in sales, but I can't seem to get hold of anyone. Does anyone have tips to help expedite this?
1) Root cause. It appears that this can happen when enableSched is set to "1" or "true", but the set of actual alerting properties is somehow invalid. For example, if the disabled alert has action.email = 1 but specifies no value for action.email.to, then the green "Enable" button will quietly fail for all users, even admins. It posts nothing to the backend and displays no message to the user.

2) Workaround. Go to "Edit > Advanced Edit", scroll down to find "is_scheduled", change it from "true" to "false", and submit. Now you will be able to enable the saved search. Then, when you click "Edit Schedule", you'll be able to re-enable scheduling, and the UI will tell you which required keys aren't populated yet.

(For app developers: there are valid reasons to ship a disabled alert, for instance one with a specific cron schedule that is tied to the SPL. I believe another workaround would be to specify "example@example.com" as the action.email.to key. This may seem strange, but according to RFC 2606 and RFC 6761 the "example.com" domain is reserved solely for documentation and examples.)
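As a concrete illustration, a saved search in roughly this state would reproduce the silent failure. This is a sketch: the stanza name and search string are hypothetical, and only the enableSched / action.email keys matter here.

```ini
# savedsearches.conf (hypothetical app-shipped alert)
[Example Shipped Alert]
search = index=_internal log_level=ERROR | stats count
disabled = 1
enableSched = 1
# Email action is on, but no recipient is set -- this is the invalid
# combination that makes the green "Enable" button fail silently:
action.email = 1
# Per the note above, a documentation-reserved address (RFC 2606)
# would make the configuration valid:
# action.email.to = example@example.com
```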
Posting this in case other folks run into it. It's possible for an app to ship an alert disabled in such a way that when any user tries to enable it via Manager ("Edit > Enable"), it doesn't work. Instead of enabling the alert, nothing happens at all: you click the green button and nothing happens. Looking at the browser console, there are no errors when this happens, and the JavaScript makes no attempt to post anything to Splunk. The question has two parts:
-- What is the root cause of this, and how can folks avoid accidentally shipping app content like this?
-- What workaround might exist for the end users who need to enable the disabled alert?
Hello, I have a Splunk log that contains tons of quotes, commas, and other special characters. I'm trying to pull only the Latitude":77.0999 and Longitude":-99.999 values, and from time to time there will be a WarningMessages entry such as "This mail requires a number or Apartment number" that I would like to capture in a dashboard.

StandardizedAddres SUCCEEDED - FROM: {"Address1":"123 NAANNA SAND RD","Address2":"","City":"GREEN","County":null,"State":"WY","ZipCode":"44444-9360","Latitude":null,"Longitude":null,"IsStandardized":true,"AddressStatus":1,"AddressStandardizationType":0} RESULT: 1 | {"AddressDetails":[{"AssociatedName":"","HouseNumber":"123","Predirection":"","StreetName":" NAANNA SAND RD ","Suffix":"RD","Postdirection":"","SuiteName":"","SuiteRange":"","City":" GREEN","CityAbbreviation":"GREEN","State":"WY","ZipCode":"44444","Zip4":"9360","County":"Warren","CountyFips":"27","CoastalCounty":0,"Latitude":77.0999,"Longitude":-99.999,"Fulladdress1":"123 NAANNA SAND RD ","Fulladdress2":"","HighRiseDefault":false}]," WarningMessages":["This mail requires a number or Apartment number."]:[],"ErrorMessages":[],"GeoErrorMessages":[],"Succeeded":true,"ErrorMessage":null}

I currently use the query below, but I'm not having any luck. This is past my skill set; please help.

index="cf" Environment="NA" msgTxt="API=/api-123BusOwnCommon/notis*" | eval msgTxt=" API=/api-123BusOwnCommon/notis /WGR97304666665/05-08-2024 CalStatus=Success Controller=InsideApi_ notis Action= notis Duration=3 data*" | rex "Duration=(?<Duration>\w+)" | timechart span=1h avg(Duration) AS avg_response by msgTxt

I'd like to show the data like this in Splunk:

Latitude       Longitude    WarningMessages
2.351           42.23           Error in blah
4.10             88.235          Hello world
454.2           50.02            Blah blah blah blah

Thank you
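A hedged SPL sketch of one way to pull those three values out with rex. The index/Environment/msgTxt filters are copied from the asker's query, and the regexes assume the keys appear exactly as in the sample above; they may need adjusting (e.g. for multiple warning messages, where rex max_match could help).

```
index="cf" Environment="NA" msgTxt="API=/api-123BusOwnCommon/notis*"
| rex "\"Latitude\":(?<Latitude>-?\d+\.\d+)"
| rex "\"Longitude\":(?<Longitude>-?\d+\.\d+)"
| rex "WarningMessages\":\[\"(?<WarningMessages>[^\"]+)\""
| table Latitude Longitude WarningMessages
```

Because rex captures the first match of the pattern, the "Latitude":null in the FROM block does not match the numeric pattern, so the numeric values in the RESULT block are picked up instead.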
Hi Splunk Community team, please help. I have N lookup files: lk_file_abc3477.csv, lk_file_xare000.csv, lk_file_ppbc34ee.csv, etc. I have a Splunk search/script that processes the same data type and the same number of columns for each. My question is: is there any way to process each file and send an individual email for each one in a single execution, using the Reports or Alerts options or any other way? Regards,
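One hedged approach for a single execution is to drive the map command from a list of file names and send one email per iteration. This is only a sketch: lookup_file_list.csv (a driver lookup with a filename column listing the lk_file_*.csv names) and the recipient address are hypothetical, and the inner search would be replaced by the real processing logic.

```
| inputlookup lookup_file_list.csv
| map maxsearches=50 search="| inputlookup $filename$ | stats count AS rows | sendemail to=\"ops@example.com\" subject=\"Report for $filename$\""
```

Each map iteration runs the inner search once per file, so one email goes out per lookup in a single scheduled run.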
What problem are you trying to solve this way? If you want to adjust the criticality of an alert depending on the asset affected, that's functionality of Enterprise Security.
Well, it all depends on your utilization, really. The rule of thumb is that a single indexer can handle up to 300GB/day if not running premium apps (ES or ITSI), or 100GB/day if running ES or ITSI. Actually, a single indexer can index far more daily if it doesn't do any searching. Since you're using ES, there's probably going to be a lot of searching (if for no other reason, just to keep the datamodel summaries up to date). So one indexer per 200GB may or may not be too small, depending on your actual load.

You're specifying quite a lot of hardware for the indexers, whereas normally you'd rather have more indexers than bigger ones. More CPUs mean you could add ingestion pipelines, but, especially when reaching for cold data, you might starve your indexers of I/O performance, since many concurrent searches will potentially compete for I/O resources.

It's also not clear to me how this NAS frozen space is supposed to work. Is it a shared space, or do you want a dedicated share for each indexer? Remember that each indexer freezes buckets independently, so unless you script something to keep the storage "tidy", you'll end up with multiple copies of the same frozen bucket.
Your question is a bit skimpy on details, but I assume that your event contains the string Message=" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server "RO76 is currently down / unreachable." somewhere within its contents, and I suspect you're using the value of a field Message which is (probably automatically) extracted from your event. This field appears "truncated". Most probably it's due either to badly defined (or undefined) extractions or to badly formatted data, depending on how you look at it. Splunk apparently uses the key="value" format to find fields in your raw data. Since your value contains a quote, that quote delimits the value of the field. Depending on your data, you might be able to define an extraction that catches the whole string, if you can anchor the regex somewhere after that string. But as a general rule, you should not have data containing unescaped delimiters.
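For example, if the Message key=value pair is the last thing in the event (an assumption about this data), you can anchor the regex at the end of the raw text and capture greedily, so the match runs past the embedded quote to the final one:

```
| rex field=_raw "Message=\"(?<Message>.*)\"\s*$"
```

The greedy .* is what allows the capture to include the unescaped quote inside the value; if Message is not the last pair in the event, a different anchor after the string would be needed.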
Pulling CMDB data from SNOW is generating 10,000 errors per week and causing long SQL queries in SNOW, which then time out when querying the CMDB table. This table has over 10 million records and cannot be queried directly. Has anyone had this issue in the past? How did you fix it? What other alternatives are there?
@ITWhisperer It wasn't obvious at first glance for me either, but if you scroll back, "report_to_map_through_indexes" was actually the name of a saved search used in the solution. @Petermann As you can see in the docs for the map command, it takes as an argument either a literal search or the name of a saved search. In this case @ejwade used the latter option: the map command references a report named report_to_map_through_indexes, the definition of which is shown in the original solution.
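To illustrate the two forms of the map command (the driving tstats search here is just a sketch), the literal-search form looks like this:

```
| tstats count WHERE index=* BY index
| map maxsearches=10 search="search index=$index$ | head 1"
```

and the saved-search form, as used in the solution, simply names the report:

```
| tstats count WHERE index=* BY index
| map maxsearches=10 report_to_map_through_indexes
```

In both cases map runs the inner search once per input row, substituting fields like $index$ from that row.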
The raw message shows the correct field value, but stats and table truncate the field value.

Raw message: Message=" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server "RO76 is currently down / unreachable."

Table & stats show: Message=| RO76 | PXS (DTI) - Server - Windows Server Down Critical | Server

It breaks after the " sign.
Hi everyone,

We're planning a new Splunk deployment and considering different scenarios (Plan A and Plan B) based on daily ingestion and data retention needs. I would appreciate it if you could review the sizing and let me know if anything looks misaligned or could be optimized based on Splunk best practices.

Overview of each plan:

Plan A: Daily ingest: 2.0TB; Retention: same; 10 Indexers; 3 Search Heads; 2 ES Search Heads

Plan B: Daily ingest: 2.6TB; Retention: same; 13 Indexers; 3 Search Heads; 3 ES Search Heads

Each plan also includes CM, MC, SH Deployer, DS, LM, 4-5 HFs, and several UBA/ML nodes.

Example specs per indexer (Plan C): Memory: 128GB; vCPU: 96 cores; Disk: 500GB OS SSD + 6TB hot SSD + 30TB cold HDD + 11TB frozen (NAS)

What I'm looking for:
Are these hardware specs reasonable per Splunk sizing guidelines?
Is the number of indexers/search heads appropriate for the daily ingest and retention?
Any red flags or over/under-sizing you would call out?

Thanks in advance for your insights!
Hi @kn450 , Having the same issue, did you find a solution for this? Thank You!
Hello, I am setting up a test instance to be a license master and trying to connect a second Splunk install to point to this license master. All instances are Splunk 9.4.1. I'm getting the error on the peer: "this license does not support being a remote master". I've installed a developer license and it shows 'can be remote', so I'm not sure why I cannot connect a peer to it. On the LM it lists 4 licenses, and the 'dev' one is #2. Do I need to change the license group to activate the 'dev' license?
Hi @danielbb

No, you can only use the items in the dropdown. If you try to "Advanced Edit" the alert to use a field, you get a validation error. The only other thing you might be able to do is manually edit savedsearches.conf and *try* using a field returned there; however, your mileage may vary. This would also introduce management issues, as it might make the alert impossible to edit in the UI. So while I'm saying it might be possible, I wouldn't recommend it, I'm afraid.

Did this answer help you? If so, please consider: adding karma to show it was useful; marking it as the solution if it resolved your issue; commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
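For reference, the setting in question lives in savedsearches.conf as alert.severity and only accepts a fixed numeric level (1=debug, 2=info, 3=warn, 4=error, 5=severe, 6=fatal), which is why a per-result field value is rejected. A minimal sketch with a hypothetical stanza name:

```ini
# savedsearches.conf (hypothetical alert stanza)
[my_alert]
# Static severity only; a field reference here fails validation
alert.severity = 4
```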
Hi @danielbb, instead of a scheduled report, use an alert that fires if the number of results is greater than 0. Ciao. Giuseppe
Hi @danielbb, could you describe your request in more detail? Are you speaking of Splunk Enterprise or Enterprise Security? Ciao. Giuseppe
Running version 9.3, log-local.cfg doesn't seem to be applied. Even after a restart, Splunk is throwing more than 10 of these INFO lines per second. This message should probably be moved to the DEBUG category. It is possible there's another issue with my instances, but this mess of logs is making it very hard to troubleshoot. `splunk set log-level TcpInputProc -level WARN` does work, and modifying log.cfg also works.
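For comparison, the log-local.cfg entry that does not seem to take effect here would look roughly like this (assuming, as the working CLI command suggests, that the noisy messages come from the TcpInputProc category):

```ini
# $SPLUNK_HOME/etc/log-local.cfg
[splunkd]
category.TcpInputProc=WARN
```

Entries in log-local.cfg are meant to override log.cfg and survive upgrades, which is why its being ignored is worth investigating even though editing log.cfg works.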
We would like to dynamically populate the severity field; is this possible?