All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @dionrivera

Modify the data input configuration within the Splunk Add-on for ServiceNow to apply filters to the CMDB data collection. Instead of querying the entire table, specify criteria to retrieve only the subset of records you need. If necessary, create multiple inputs, each with its own filtering criteria.

Use ServiceNow's encoded query syntax in the "Filter parameters" field of the CMDB input configuration in the Splunk Add-on. For example, to pull only active Linux servers:

sys_class_name=cmdb_ci_linux_server^operational_status=1

Querying a very large table (10 million+ records) without filters often leads to performance degradation and timeouts in ServiceNow. By applying specific filters in the add-on's input configuration, you significantly reduce the amount of data ServiceNow needs to process and return, thereby avoiding long-running SQL queries and the associated errors.

Work with your ServiceNow administrator to identify the most efficient filters and to ensure appropriate database indexes exist on the ServiceNow side for the fields used in your filter (e.g., sys_class_name, operational_status, sys_updated_on). Test your encoded query directly in ServiceNow's table list view first to validate its correctness and performance before configuring it in the Splunk add-on. Consider incremental fetching by filtering on sys_updated_on so you only pull records that have changed since the last poll, rather than repeatedly pulling static data.

Splunk Add-on for ServiceNow documentation: https://docs.splunk.com/Documentation/AddOns/latest/ServiceNow/ConfigureInputs
ServiceNow filtering docs: https://www.servicenow.com/docs/bundle/xanadu-platform-user-interface/page/use/common-ui-elements/reference/r_OpAvailableFiltersQueries.html

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
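For illustration, the class filter above can be combined with an incremental condition on sys_updated_on in the same encoded query. Treat this as a sketch to validate in ServiceNow's table list view first; whether the javascript: date operand is accepted in the add-on's "Filter parameters" field may depend on the add-on version:

```
sys_class_name=cmdb_ci_linux_server^operational_status=1^sys_updated_on>javascript:gs.hoursAgoStart(1)
```

If the filter field rejects the date operand, splitting the collection across several inputs by sys_class_name alone already keeps each query well below a full scan of the 10M-record table.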
Hi @OscarAlva

How have you been contacting sales? There is a list of regional contacts/contact methods available at https://www.splunk.com/en_us/about-splunk/contact-us.html#sales

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Cheng2Ready

If you apply multiple field extractions, the one with the highest precedence will be used. Instead, you may wish to manually modify the regular expression to cover both events.

When extracting the fields using the field extractor wizard, on the "Select fields" step, select the "Show regular expression" text. This allows you to click the "Edit regular expression" button on the right, which gives you the regex you can override. At this point you should define a regex that matches all the relevant events.

If you need help creating the regex, please post raw examples/samples of the events and I'd be happy to help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
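As a purely hypothetical sketch (the event shapes here are invented, not taken from your data), a single rex with an optional group can cover two nearly identical log variants in one extraction:

```
| rex field=_raw "user=(?<user>\S+)\s+action=(?<action>\w+)(?:\s+reason=\"(?<reason>[^\"]+)\")?"
```

The (?: ... )? optional group lets the same expression match events both with and without the trailing reason="..." part, which is typically what the wizard fails to generate on its own.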
When using the Field Extractor, can you use the same name for a field? Will it append to the field that was originally created?

Example: I am extracting from the _raw data. I found that some of the _raw data didn't match when I highlighted it; using regex match I was getting the red X, even though it should have been captured since both logs are identical in pattern. So I extracted twice on a single field name across two data sets. Will it append, and add to the set of data the field looks for?
I'm attempting to speak with someone in sales. I can't seem to get hold of anyone. Does anyone have any tips to help expedite this?
1) Root cause. It appears that this can happen when enableSched is set to "1" or "true", but the set of actual alerting properties is somehow invalid. For example, if the disabled alert has action.email = 1 but specifies no value for action.email.to, then the green "Enable" button will quietly fail for all users, even admins. It posts nothing to the backend and displays no message to the user.

2) Workaround. Go to "Edit > Advanced Edit", then scroll down to find "is_scheduled". Change this from "true" to "false" and submit. Now you will be able to "Enable" the saved search. Then, when you click "Edit schedule", you'll be able to re-enable scheduling, and the UI will tell you which required keys aren't populated yet.

(For app developers: there are valid reasons to ship a disabled alert, for instance with a specific cron schedule that is somehow tied to the SPL. I believe another workaround would be to specify "example@example.com" as the action.email.to key. This may seem strange, but according to RFC 2606 and RFC 6761 the "example.com" domain is reserved for documentation and examples only.)
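A minimal savedsearches.conf sketch of the failure mode described above (the stanza name and SPL are invented for illustration):

```
[Example Shipped Alert]
search = index=_internal log_level=ERROR | stats count
disabled = 1
enableSched = 1
cron_schedule = */5 * * * *
# Broken combination: the email action is enabled...
action.email = 1
# ...but action.email.to is absent, so the green "Enable" button
# silently fails. Shipping a placeholder recipient from the reserved
# example.com domain (RFC 2606) avoids the invalid state:
# action.email.to = example@example.com
```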
Posting this in case other folks run into it.

It's possible for an app to ship an alert disabled in such a way that when any user tries to enable it by going to the manager and selecting "Edit > Enable", it doesn't work. Instead of enabling the alert, nothing happens at all. You click the green button and nothing happens. Looking at the browser console, there are no errors when this happens, and the JavaScript makes no attempt to post anything at all to Splunk.

The question has two parts:
- What is the root cause of this, and how can folks avoid accidentally shipping app content like this?
- What workaround might exist for the end users who need to enable the disabled alert?
Hello, I have a Splunk log that contains tons of quotes, commas, and other special characters. I'm trying to pull only the Latitude":77.0999 and Longitude":-99.999 values, and from time to time there will be a WarningMessages entry such as "This mail requires a number or Apartment number" that I would like to capture in a dashboard.

StandardizedAddres SUCCEEDED - FROM: {"Address1":"123 NAANNA SAND RD","Address2":"","City":"GREEN","County":null,"State":"WY","ZipCode":"44444-9360","Latitude":null,"Longitude":null,"IsStandardized":true,"AddressStatus":1,"AddressStandardizationType":0} RESULT: 1 | {"AddressDetails":[{"AssociatedName":"","HouseNumber":"123","Predirection":"","StreetName":" NAANNA SAND RD ","Suffix":"RD","Postdirection":"","SuiteName":"","SuiteRange":"","City":" GREEN","CityAbbreviation":"GREEN","State":"WY","ZipCode":"44444","Zip4":"9360","County":"Warren","CountyFips":"27","CoastalCounty":0,"Latitude":77.0999,"Longitude":-99.999,"Fulladdress1":"123 NAANNA SAND RD ","Fulladdress2":"","HighRiseDefault":false}]," WarningMessages":["This mail requires a number or Apartment number."]:[],"ErrorMessages":[],"GeoErrorMessages":[],"Succeeded":true,"ErrorMessage":null}

I currently use the query below, but I'm not having any luck. This is past my skill set, please help...

index="cf" Environment="NA" msgTxt="API=/api-123BusOwnCommon/notis*"
| eval msgTxt=" API=/api-123BusOwnCommon/notis /WGR97304666665/05-08-2024 CalStatus=Success Controller=InsideApi_ notis Action= notis Duration=3 data*"
| rex "Duration=(?<Duration>\w+)"
| timechart span=1h avg(Duration) AS avg_response by msgTxt

I'd like to show the data like this in Splunk:

Latitude    Longitude    WarningMessages
2.351       42.23        Error in blah
4.10        88.235       Hello world
454.2       50.02        Blah blah blah blah...

Thank you
Hi Splunk Community team,

Please help: I have N lookup files: lk_file_abc3477.csv, lk_file_xare000.csv, lk_file_ppbc34ee.csv, etc. I have a Splunk search/script that will be processing the same data type and the same number of columns. My question is: is there any way to process each file and send an email for each one individually, using the Reports or Alerts option or any other way, in one single execution?

Regards,
What problem are you trying to solve this way? If you want to adjust the criticality of an alert depending on the affected asset, that's the functionality of Enterprise Security.
Well, it all depends on your utilization really. The rule of thumb is that a single indexer can handle up to 300GB/day if not running premium apps (ES or ITSI), or 100GB/day if running ES or ITSI. Actually, a single indexer can index far more daily if it doesn't do any searching. Since you're using ES, there's probably going to be a lot of searching (if not for any other reason, then just for keeping datamodel summaries up to date). So one indexer per 200GB might or might not be too small, depending on your actual load.

You're specifying quite a lot of hardware for the indexers, whereas normally you'd rather have more indexers than bigger ones. More CPUs mean you could add ingestion pipelines, but, especially if reaching for cold data, you might starve your indexers of I/O performance, since you will have the potential for many concurrent searches competing for I/O resources.

It's also not clear to me how this NAS frozen space is supposed to work. Is it a shared space, or do you want a dedicated share for each indexer? Remember that each indexer freezes buckets independently, so unless you script it to keep the storage "tidy" you'll end up with multiple copies of the same frozen bucket.
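On the shared-NAS point: one way to keep each indexer's frozen buckets separate is to give every indexer its own subdirectory via coldToFrozenDir (the mount path here is an assumption; set it per index, per indexer):

```
# indexes.conf on indexer idx01 -- each indexer writes to its own subdirectory
[main]
coldToFrozenDir = /mnt/nas/frozen/idx01/main
```

This doesn't deduplicate replicated copies of the same bucket by itself; with a replication factor above 1 you would still need a cleanup script that compares bucket IDs across the per-indexer directories.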
Your question is a bit skimpy on details, but I assume that your event contains the string Message=" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server "RO76 is currently down / unreachable." somewhere within its contents. And I suspect you're using the value of a field Message which is (probably automatically) extracted from your event, and this field is "truncated".

Most probably it's due to either (depending on how you look at it) badly defined (or undefined) extractions or badly formatted data. Splunk apparently uses the key="value" format to find field(s) in your raw data. Since your value contains a quote, this quote delimits the value of the field. Depending on your data, you might be able to define an extraction catching the whole string if you can anchor the regex somewhere after that string. But as a general rule, you should not have data containing unescaped delimiters.
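As a sketch: if the Message="..." string runs to the end of the event, a search-time rex anchored on the final quote can recover the whole value despite the embedded quotes (this assumes nothing follows the closing quote in your raw data):

```
... | rex "Message=\"(?<Message>.*)\"\s*$"
```

The greedy .* deliberately swallows the inner quotes and stops at the last quote before end-of-event, replacing the automatically extracted, truncated Message value.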
Pulling CMDB data from SNOW is generating 10,000 errors per week and causing long SQL queries in SNOW, which then time out when querying the CMDB table. This table is over 10 million records and cannot be queried directly. Has anyone had this issue in the past? How did you fix it? What other alternatives are there?
@ITWhisperer It wasn't obvious at first glance for me either, but if you scroll back, "report_to_map_through_indexes" was actually the name of a saved search used in the solution.

@Petermann As you can see in the docs for the map command, it takes either a literal search or the name of a saved search as its argument. In this case @ejwade used the latter option. The map command references a report named report_to_map_through_indexes, the definition of which is shown below in the original solution.
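For reference, the pattern being described looks roughly like this (the generating search here is illustrative; report_to_map_through_indexes stands for the saved search from the original solution):

```
| stats count by index
| map maxsearches=50 report_to_map_through_indexes
```

Each row produced by the first part of the search runs one invocation of the saved search, with the row's field values available inside it as $index$-style tokens.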
The raw message shows the correct field value, but stats & table truncate the field value.

Raw message: Message=" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server "RO76 is currently down / unreachable."

Table & stats show: Message=| RO76 | PXS (DTI) - Server - Windows Server Down Critical | Server

It breaks after the " sign.
Hi everyone,

We're planning a new Splunk deployment and considering different scenarios (Plan A and Plan B) based on daily ingestion and data retention needs. I would appreciate it if you could review the sizing and let me know if anything looks misaligned or could be optimized based on Splunk best practices.

Overview of each plan:

Plan A:
- Daily ingest: 2.0TB
- Retention: same
- 10 Indexers
- 3 Search Heads
- 2 ES Search Heads

Plan B:
- Daily ingest: 2.6TB
- Retention: same
- 13 Indexers
- 3 Search Heads
- 3 ES Search Heads

Each plan also includes CM, MC, SH Deployer, DS, LM, 4–5 HFs, and several UBA/ML nodes.

Example specs per indexer:
- Memory: 128GB
- vCPU: 96 cores
- Disk: 500GB OS SSD + 6TB hot SSD + 30TB cold HDD + 11TB frozen (NAS)

What I'm looking for:
- Are these hardware specs reasonable per Splunk sizing guidelines?
- Is the number of indexers/search heads appropriate for the daily ingest and retention?
- Any red flags or over/under-sizing you would call out?

Thanks in advance for your insights!
Hi @kn450, I'm having the same issue. Did you find a solution for this? Thank you!
Hello, I am setting up a test instance to be a license master and trying to connect a second Splunk install to point to this license master. All Splunk 9.4.1. I'm getting the error on the peer: "this license does not support being a remote master". I've installed a developer license and it shows 'can be remote', so I'm not sure why I cannot connect a peer to it. On the LM it lists 4 licenses and the 'dev' one is #2; do I need to change the license group to activate the 'dev' license?
Hi @danielbb

No, you can only use the items in the dropdown. If you try to "Advanced Edit" the alert to use a field, you get a validation error.

The only other thing you might be able to do is manually edit savedsearches.conf and *try* using a field returned there; however, your mileage may vary. This would also introduce management issues for the alert, as it might make it impossible to edit in the UI. So while I'm saying it might be possible, I wouldn't recommend it, I'm afraid.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @danielbb,

Instead of a scheduled report, use an alert that fires if the number of results is greater than 0.

Ciao.
Giuseppe
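For completeness, the trigger condition described above corresponds to these savedsearches.conf keys (the search and schedule are placeholders, not from the thread):

```
[My Alert]
search = index=main sourcetype=my_data ERROR
cron_schedule = */15 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
```

The same condition can be set in the UI under "Trigger alert when > Number of Results > is greater than > 0".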