All Posts


I tried this method but it's giving me servers that are monitored @ITWhisperer
Hi All

Problem description: Search Head physical memory utilization is increasing 2% per day.

Instance deployment: Running Splunk Enterprise version 9.0.3 with 2 un-clustered Search Heads. The main SH with this issue has 48 CPU Cores | Physical Mem 32097 MB | Search Concurrency 10 | CPU usage 2% | Memory usage 57% | Linux 8.7. It is used to search across a cluster of 6 indexers.

I've had Splunk look into it, and they reported this could be due to an internal bug fixed in 9.0.7 and 9.1.2 (Jira SPL-241171). The actual bug fix is by the following Jira: SPL-228226: SummarizationHandler::handleList() calls getSavedSearches for all users, which uses a lot of memory, causing OOM at Progressive. A workaround of setting do_not_use_summaries = true in limits.conf did not fix the issue. The splunkd server process seems to be the main component increasing its memory usage over time. A Splunk process restart lowers the memory usage, but it then trends upward again at a slow rate.

If anyone could share a similar experience so we can validate the Splunk support solution of upgrading to 9.1.2 based on the symptoms described above, it would be appreciated. Thanks
Hi @VK18, If the notable is new and not being worked on by analysts/the concerned team, then you will find empty results. Check a notable whose status has been changed from New to another status such as 'In Progress' or 'Closed'. You should be able to find the history. ----- Srikanth Yarlagadda
Hi Team, We're encountering a problem with the Incident Review History tab in Splunk ES. Clicking on Incident Review, then a specific notable (like 'Tunnelling Via DNS'), followed by History, and then clicking 'View all review activity for this Notable Event' results in an empty history being displayed for all the notables. Any leads on this would be highly appreciated. Note: Recently, we upgraded Splunk ES from 7.0.0 to 7.1.2. Regards, VK18
Thanks for the tag. Yeah, there is some confusion with the latest release because we added the ability to set a default API key (as well as others), and the way we implemented it is causing the issue. Basically, we need users to enter a default ORG and default KEY but wanted to give them the option to add others as well, and yeah, I need to rework the setup page. The screenshots under the Installation section on GitHub should help clarify: https://github.com/bentleymi/ChatGPT-4-Splunk
The _raw field is where Splunk stores the raw event. Many commands default to that field and a few work only on that field. The spath command defaults to _raw, but you can use spath input=_raw if you wish. The example event looks fine to me and passes checks at jsonlint.com.
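A quick way to see this in action is with a throwaway event (the JSON and field names below are invented for the demo):

```
| makeresults
| eval _raw="{\"user\": {\"name\": \"alice\", \"role\": \"admin\"}}"
| spath input=_raw path=user.name output=username
| table username
```

This returns a single row with username=alice; dropping the input argument behaves the same, since spath reads _raw by default.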
If the column is $X_Furniture, then change the rename to

| rename "$X_Furniture" as host

You should be able to see what the subsearch returns by just running it on its own. You can add | format to the end of the search if you run it standalone, i.e.

| inputlookup HouseInventory.csv
| where Room="Bathroom"
| rename "$X_Furniture" as host
| appendpipe
    [ | stats count
      | where count=0
      ``` Add in what you want the default to be ```
      | eval host="No such Host" ]
| format

and you can see how that acts as a constraint to the main outer search. You still haven't said how you want your timechart to look when the Room is not found - are you showing the timechart as a graph visualisation or simply as a table?
That error implies that your OpenAI Org Id was not configured properly during the setup of the ChatGPT 4 Splunk app. This app expects to find this info within Splunk's built-in password storage (where it was stored during the setup of this TA). Here's the python code behind the search command: ChatGPT-4-Splunk/TA-openai-api/bin/openai.py at main · bentleymi/ChatGPT-4-Splunk · GitHub. Also, @jkat54 is the author of that particular app - just tagging them here in case they have any additional suggestions. See also their session at .conf23.
Let's assume your sourcetype is called WindowsEventSourcetype; then you will want to add some lines to that sourcetype's definition in props.conf and transforms.conf:

props.conf

[WindowsEventSourcetype]
TRANSFORMS-t1 = eliminate-4624-4634-3

transforms.conf

[eliminate-4624-4634-3]
REGEX = (?m)EventCode\s*=\s*(4624|4634).*?Type\s*=\s*3\s
DEST_KEY = queue
FORMAT = nullQueue

A couple of things to note: These configurations need to be deployed to where your data is "cooked" by Splunk, not where it is searched, and not on the UF. This means they should go to the Heavy Forwarder(s) and Indexer(s) in your environment that ingest this Windows event log data. I might be slightly off on the regular expression - I can't recall the exact format of the logs. If you could post a couple of samples I could tighten this up. Right now the regex is doing something like, "Use multiline mode, look for EventCodes 4624 or 4634, then some more stuff, then Type 3" - I don't recall how the Login_type is labeled within these particular events from Windows.
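Once the configs are deployed and the ingest tier restarted, a sketch of a validation search (the index name and the Logon_Type field name are assumptions - adjust them to however your environment labels these) is:

```
index=wineventlog sourcetype=WindowsEventSourcetype EventCode IN (4624, 4634) Logon_Type=3 earliest=-15m
```

If the nullQueue routing is working, this should return no events newer than the deployment time, while older matching events remain searchable.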
This is a good resource page: Splunk Cloud Platform Service Details - Splunk Documentation

- Is there a message informing the license is about to expire? You can view information about your license entitlements. Here are the docs that explain that for Splunk Cloud.
- After the expiration date, is there any grace period provided? Assume no, but open a support ticket and work with your account manager on this situation.
- In case I decide to not renew the license, are we able somehow to download the company data before its total removal? Or after my license is expired do I lose all indexed data? Per the page under Data Handling, retrieving your data is suggested to be a Splunk Professional Services engagement, which often can mean "this is kinda hard if you're new to Splunk": "If you require your ingested data to be moved into your control before the termination of your subscription, this is accomplished through a Splunk Professional Services engagement. Some data can be moved into your control by enabling Dynamic Data Self-Storage to export your aged data to your Amazon S3 or Google Cloud Storage account in the same region."

There are data egress notes on that page, too, and this page details getting your data out of your Splunk Cloud environment using Dynamic Data Self-Storage (DDSS). You could technically "age out" your data to your own S3 buckets, for example. This also could be a good discussion for the #splunk_cloud channel on the splunk-usergroups.slack.com workspace.
That particular error refers to a hiccup during the execution of the search with the search peers - aka the Splunk Indexers - that were involved in your query. Here is a diagram of a simple Splunk Enterprise deployment: you were initiating your query on a Search Head, and that query is shared out to the Indexer(s) that make up your deployment. It sounds like one of the Indexers had an issue. To find more info on what your issue is, you can open your search results, click on the Inspect Job option under the Job menu on the result page, and then open the search.log from the Job Inspector page that pops up. Within that log file you can find more details on what happened during your search to trigger the warning message "Search results might be incomplete" that you saw. If you have more info on any of the error/warning logs in that file, we could help you figure out what could be causing your issue.
When I apply ingest actions and specify the host field and put in the IP address, it works fine. But when I try to use _raw and, for instance, filter on "Teardown ICMP connection", it shows the affected events; however, when I check hours or days later, it is still ingesting the messages I filtered using _raw as the field.
I tried this and it still lists the same results (everything is still listed). Also, "$X_Furniture" is a column in the csv file as well, so the "$" is also needed.

index=House sourcetype=LivingRoom
    [ | inputlookup HouseInventory.csv
      | where Room="Bathroom"
      | rename X_Furniture as host
      | appendpipe
        [ | stats count
          | where count=0
          ``` Add in what you want the default to be ```
          | eval host="No such Host" ] ]
| timechart span=5m count by host
I'm using current Splunk Cloud: It appears the older "Splunk Add-on for AWS" can stream in CloudWatch log-group data through Inputs > Custom Data Type > CloudWatch Logs. This asks for a comma-separated list of log-groups to feed off of and presumably sets up ingest for them. Data Manager has a CloudWatch Logs section, but it appears to only cover: AWS CloudTrail, AWS Security Hub, Amazon GuardDuty, IAM Access Analyzer, IAM Credential support, and Metadata (EC2, IAM, Network ACLs, EC2 sec groups). Am I just missing something in Data Manager - does it support ingesting CloudWatch log-groups? Should I use the "Splunk Add-On for AWS"? Should I forgo both and instead use the splunk log driver with the container tasks as per https://repost.aws/knowledge-center/ecs-task-fargate-splunk-log-driver (posted a year ago)? Thank you!
You can't put two datasets into a single pie chart when split by service_name as you have 2 lots of 100% (errors and success). You can show this as a trellis view, which will then show two pie charts, one for success and the other for errors.  
Unless you need a CSV, I would suggest using Splunk's indexes to summarise data. It is more flexible to get data out of an index than a CSV, but you are on the right track. Write yourself a search that collects data for an interval and summarises it in a way you would want to save. Typically this may run daily or hourly, and a saved search has a 'summary indexing' option, so you can tell Splunk to write the results to a summary index. You will need the index to exist, but it's a simple option to enable. Searches (Reports) can be scheduled, so if you want to run it daily, you could schedule it to run after midnight each day and then use a time range of 'yesterday' for its search.
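As a sketch, a daily summary search could look like the following (the index, sourcetype, and field names are placeholders); schedule it as a report with summary indexing enabled, or use collect as shown to write the rows yourself:

```
index=web sourcetype=access_combined earliest=-1d@d latest=@d
| stats count AS daily_hits BY host
| collect index=my_summary
```

A dashboard or report can then read from the summary index (e.g. index=my_summary) far more cheaply than re-scanning the raw data.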
The span of a timechart is controlled with the syntax

| timechart span=1h count

Your example allows timechart to choose its own span based on the data volume. You can format _time after the timechart:

| eval _time=strftime(_time, "%H:%M:%S")

Note that if you do that, you will not be able to show the result on a timechart, as _time is no longer a time field in Splunk.
Thanks for your reply. It seems that the approach I need to utilise for this is to use a savedsearch to periodically populate a csv lookup table and then have a dashboard search against the table, which contains the historic data. Not sure exactly how to achieve this at this stage.
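As a sketch of that approach (the lookup file, index, and field names here are hypothetical), the scheduled saved search would end in outputlookup:

```
index=my_index earliest=-1d@d latest=@d
| stats count AS daily_count BY host
| eval day=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| outputlookup append=true historic_summary.csv
```

The dashboard panel then searches the accumulated history with | inputlookup historic_summary.csv; append=true adds rows instead of overwriting the file on each run.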
I suspect there are a couple of things going on. What is your <drilldown> logic in the XML for picking the start and end dates for the drilldown search? If it's not giving you a 7 day range, then it seems likely there's an issue there. Secondly, your primary search is doing dedup case_id. If your drilldown search is ALSO doing dedup case_id but on a shorter time range, then it's possible that case ids from a date outside the drilldown range that were deduped away are now being counted, i.e. consider case_id="ABC123" occurring on 26 November and also on 22 November. When you search 19-25 November, ABC123 is counted for 22 November; but when you search 19-27 November, ABC123 is FIRST found on 26 November, so the count of ABC123 from 22 November is now removed due to the dedup.
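You can see dedup's first-occurrence behaviour with a toy run (makeresults format=csv needs a reasonably recent Splunk version):

```
| makeresults format=csv data="case_id,day
ABC123,26-Nov
ABC123,22-Nov
XYZ789,22-Nov"
| dedup case_id
```

Only the first ABC123 row in search order survives, which is why widening the drilldown time range can change which date a case is counted against.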
You can either use the prestats option as @richgalloway suggests, or the alternative way is to use count in tstats, then sum(count) in timechart, i.e.

| tstats count where index IN (index1, index2, index3) by _time, host
| where match(host, "^.*\.device\.mycompany\.com$")
| timechart sum(count) by host