All Posts

If the column is $X_Furniture, then change the rename to

| rename "$X_Furniture" as host

You should be able to see what the subsearch returns by just running it on its own. You can add | format to the end of the search if you run it standalone, i.e.

| inputlookup HouseInventory.csv
| where Room="Bathroom"
| rename "$X_Furniture" as host
| appendpipe
    [ | stats count
      | where count=0
      ``` Add in what you want the default to be ```
      | eval host="No such Host" ]
| format

and you can see how that acts as a constraint to the main outer search. You still haven't said how you want your timechart to look when the Room is not found - are you showing the timechart as a graph visualisation or simply as a table?
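The appendpipe trick above boils down to "if the subsearch returned no rows, emit one default row". As a sanity check of that logic outside Splunk, here is a minimal Python sketch (the host values are invented for illustration):

```python
def with_default(rows, default_row):
    """Mimic `appendpipe [ | stats count | where count=0 | eval ... ]`:
    when the incoming row set is empty, append a single default row;
    otherwise pass the rows through unchanged."""
    return rows if rows else [default_row]

# Subsearch found matching furniture: rows pass through untouched.
assert with_default([{"host": "Sofa"}], {"host": "No such Host"}) == [{"host": "Sofa"}]

# Subsearch found nothing: the default row is emitted instead, so the
# outer search still gets a usable constraint.
assert with_default([], {"host": "No such Host"}) == [{"host": "No such Host"}]
```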
That error implies that your OpenAI org ID was not configured properly during the setup of the ChatGPT 4 Splunk app. The app expects to find this info within Splunk's built-in password storage (where it was stored during the setup of this TA). Here's the Python code behind the search command: ChatGPT-4-Splunk/TA-openai-api/bin/openai.py at main · bentleymi/ChatGPT-4-Splunk · GitHub. Also, @jkat54 is the author of that particular app - tagging them here in case they have any additional suggestions. See also their session at .conf23.
Let's assume your sourcetype is called WindowsEventSourcetype; then you will want to add some lines to that sourcetype's definition in props.conf and transforms.conf:

props.conf
[WindowsEventSourcetype]
TRANSFORMS-t1 = eliminate-4624-4634-3

transforms.conf
[eliminate-4624-4634-3]
REGEX = (?m)EventCode\s*=\s*(4624|4634).*?Type\s*=\s*3\s
DEST_KEY = queue
FORMAT = nullQueue

A couple of things to note: these configurations need to be deployed to where your data is "cooked" by Splunk, not to the search head or a universal forwarder (UF). This means they should go to the Heavy Forwarder(s) and Indexer(s) in your environment that ingest this Windows event log data. I might be slightly off on the regular expression - I can't recall the exact format of the logs. If you could post a couple of samples, I could tighten this up. Right now the regex is doing something like, "Use multiline mode, look for EventCodes 4624 or 4634, then some more stuff, then Type 3" - I don't recall how the Logon_Type is labeled within these particular events from Windows.
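Since I'm unsure of the exact log layout, here's a quick way to sanity-check the regex idea outside Splunk with Python. The sample events below are invented, and I've used (?s) instead of (?m) so that .*? can cross line breaks, since EventCode and the logon type usually sit on different lines of the raw event:

```python
import re

# Hypothetical sample events -- the real Windows layout may differ, so treat
# this as a sanity check of the regex idea, not the final pattern.
LOGON_TYPE_3 = """EventCode=4624
LogName=Security
Logon_Type=3
"""
LOGON_TYPE_2 = """EventCode=4624
LogName=Security
Logon_Type=2
"""
LOGOFF_TYPE_3 = """EventCode=4634
Logon_Type=3
"""
OTHER_EVENT = """EventCode=4625
Logon_Type=3
"""

# (?s) lets .*? span multiple lines of the raw event.
PATTERN = re.compile(r"(?s)EventCode\s*=\s*(4624|4634).*?Type\s*=\s*3\s")

# Only 4624/4634 events with Type 3 should be sent to the nullQueue.
for raw, should_drop in [(LOGON_TYPE_3, True), (LOGON_TYPE_2, False),
                         (LOGOFF_TYPE_3, True), (OTHER_EVENT, False)]:
    assert bool(PATTERN.search(raw)) == should_drop
```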
This is a good resource page: Splunk Cloud Platform Service Details - Splunk Documentation

- Is there a message informing the license is about to expire?
You can view information about your license entitlements. Here are the docs that explain that for Splunk Cloud.

- After the expiration date, is there any grace period provided?
Assume no, but open a support ticket and work with your account manager on this situation.

- In case I decide to not renew the license, are we able somehow to download the company data before its total removal? Or after my license is expired I lose all indexed data?
Per the page under Data Handling, retrieving your data is suggested to be a Splunk Professional Services engagement, which often can mean "this is kinda hard if you're new to Splunk":

"If you require your ingested data to be moved into your control before the termination of your subscription, this is accomplished through a Splunk Professional Services engagement. Some data can be moved into your control by enabling Dynamic Data Self-Storage to export your aged data to your Amazon S3 or Google Cloud Storage account in the same region."

There are data egress notes on that page, too, and this page details getting your data out of your Splunk Cloud environment using Dynamic Data Self-Storage (DDSS). You could technically "age out" your data to your own S3 buckets, for example.

This could also be a good discussion for the #splunk_cloud channel on the splunk-usergroups.slack.com workspace.
That particular error refers to a hiccup during the execution of the search on the search peers - aka the Splunk Indexers - that were involved in your query. Here is a diagram of a simple Splunk Enterprise deployment: you initiate your query on a Search Head, and that query is shared out to the Indexer(s) that make up your deployment. It sounds like one of the Indexers had an issue.

To find more info on the issue, open your search results and click the Inspect Job option under the Job menu on the results page, then open the search.log from the Job Inspector page that pops up. Within that log file you can find more details on what happened during your search to trigger the warning message "Search results might be incomplete" that you saw. If you share any of the error/warning logs from that file, we could help you figure out what is causing your issue.
When I apply ingest actions and specify the host field with an IP address, it works fine. But when I try to use _raw and, for instance, filter on "Teardown ICMP connection", it shows the affected events - yet when I check hours or days later, Splunk is still ingesting the messages I filtered using _raw as the field.
I tried this and it still lists the same results (everything is still listed). Also, "$X_Furniture" is a column in the csv file, so the "$" is needed.

index=House sourcetype=LivingRoom
    [ | inputlookup HouseInventory.csv
      | where Room="Bathroom"
      | rename X_Furniture as host
      | appendpipe
          [ | stats count
            | where count=0
            ``` Add in what you want the default to be ```
            | eval host="No such Host" ] ]
| timechart span=5m count by host
I'm using current Splunk Cloud. It appears the older "Splunk Add-on for AWS" can stream in CloudWatch log-group data through Inputs > Custom Data Type > CloudWatch Logs. This asks for a comma-separated list of log groups to feed off of and presumably sets up ingest for them.

Data Manager has a CloudWatch Logs section, but it appears to only cover:
AWS CloudTrail
AWS Security Hub
Amazon GuardDuty
IAM Access Analyzer
IAM Credential support
Metadata (EC2, IAM, Network ACLs, EC2 security groups)

Am I just missing something in Data Manager - does it support ingesting CloudWatch log groups? Should I use the "Splunk Add-on for AWS"? Or should I forgo both and instead use the splunk log driver with the container tasks as per https://repost.aws/knowledge-center/ecs-task-fargate-splunk-log-driver (posted a year ago)?

Thank you!
You can't put two datasets into a single pie chart when split by service_name as you have 2 lots of 100% (errors and success). You can show this as a trellis view, which will then show two pie charts, one for success and the other for errors.  
Unless you need a CSV, I would suggest using Splunk's indexes to summarise data. It is more flexible to get data out of an index than a CSV, but you are on the right track. Write yourself a search that collects data for an interval and summarises it in the way you want to save it. Typically this runs daily or hourly; the saved search has a 'summary indexing' option, so you can tell Splunk to write the results to a summary index. You will need the index to exist, but it's a simple option to enable. Searches (Reports) can be scheduled, so if you want to run it daily, you could schedule it to run after midnight each day with a time range of 'yesterday' for its search.
The span of a timechart is controlled with the syntax

| timechart span=1h count

Your example allows timechart to choose its own span based on the data volume. You can format _time after the timechart:

| eval _time=strftime(_time, "%H:%M:%S")

Note that if you do that, you will not be able to show the result on a timechart, as _time is no longer a numeric time field in Splunk.
Thanks for your reply. It seems that the approach I need is to use a saved search to periodically populate a csv lookup table, and then have a dashboard that searches against the table containing the historic data. Not sure exactly how to achieve this at this stage.
I suspect there are a couple of things going on. First, what is your <drilldown> logic in the XML for picking the start and end dates for the drilldown search? If it's not giving you a 7-day range, then it seems likely there's an issue there.

Secondly, your primary search is doing dedup case_id. If your drilldown search is ALSO doing dedup case_id but on a shorter time range, then case ids from a date outside the drilldown range that have been deduped may now be counted. Consider case_id="ABC123" occurring on both 22 November and 26 November. When you search 19-25 November, ABC123 is still counted on 22 November; but when you search 19-27 November, ABC123 is FIRST found on 26 November, so the count of ABC123 from 22 November is removed by the dedup.
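To see the second effect concretely, here's a small Python simulation of dedup over two time ranges. The case ids and dates are invented, and I'm assuming dedup keeps the most recent copy of each case_id, which is how it behaves with Splunk's default time-descending search order:

```python
from datetime import date

# Invented events: (case_id, event_date). ABC123 occurs twice.
events = [
    ("ABC123", date(2023, 11, 22)),
    ("ABC123", date(2023, 11, 26)),
    ("XYZ789", date(2023, 11, 23)),
]

def dedup_count(events, start, end, day):
    """Keep the most recent event per case_id within [start, end]
    (mimicking `dedup case_id` on a time-descending search), then
    count how many surviving events fall on `day`."""
    in_range = [e for e in events if start <= e[1] <= end]
    latest = {}
    for cid, d in sorted(in_range, key=lambda e: e[1]):
        latest[cid] = d  # later dates overwrite earlier ones
    return sum(1 for d in latest.values() if d == day)

# Searching 19-25 Nov: ABC123's only in-range copy is 22 Nov, so it counts.
assert dedup_count(events, date(2023, 11, 19), date(2023, 11, 25), date(2023, 11, 22)) == 1

# Widening to 19-27 Nov: dedup now keeps the 26 Nov copy, so 22 Nov loses it.
assert dedup_count(events, date(2023, 11, 19), date(2023, 11, 27), date(2023, 11, 22)) == 0
```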
You can either use the prestats option as @richgalloway suggests, or the alternative way is to use count in tstats, then sum(count) in timechart, i.e.

| tstats count where index IN (index1, index2, index3) by _time, host
| where match(host,"^.*.device.mycompany.com$")
| timechart sum(count) by host
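The reason sum(count) works here: tstats pre-aggregates one count per (_time, host) bucket, and summing those partial counts per host reproduces the overall event count. A tiny Python sketch of that identity (the timestamps and hosts are made up):

```python
from collections import Counter

# Invented raw events: (time_bucket, host) pairs.
events = [("10:00", "hostA"), ("10:00", "hostA"),
          ("10:05", "hostA"), ("10:05", "hostB")]

# Like `tstats count ... by _time, host`: one partial count per bucket.
per_bucket = Counter(events)

# Like `timechart sum(count) by host`: sum the partial counts per host...
per_host = Counter()
for (_t, host), c in per_bucket.items():
    per_host[host] += c

# ...which matches counting the raw events directly.
assert per_host == Counter(host for _t, host in events)
```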
Just as an aside on the use of map: note that it is not a practical command for use on large datasets, as each map result gets executed in its own serial search, so it can take time and, depending on the search, can cause a lot of overhead when iterating through large result sets. Often there is an alternative way to write the search (but not always) - it depends on the use case.
I'm trying to have a timechart showing the count of events by a category, grouped by week. The search time is controlled by a radio button on the dashboard with options from 1 to 12 weeks, with the end date set to @w. I then have a drilldown that shows a table with more info about each event for that category in that time range.

mysearch ....
| dedup case_id
| timechart span=1w count by case_category

The chart looks fine, but when I click on certain sections to load the drilldown, much more data appears than was suggested by the count in the timechart. For instance, looking at Nov 19-25, the timechart shows 26 events, but the drilldown shows 61. When I open the drilldown search in Search, the issue seems to involve expanding the time range beyond one week. If I change the range from Nov 19-25 to Nov 19-27, the data from Nov 22-24 is either erased or reduced.

Nov 19-25 stats count results:
Nov 19: null
Nov 20: 8
Nov 21: 14
Nov 22: 19 **
Nov 23: 20 **
Nov 24: 1 **
Nov 25: null

Nov 19-28 stats count results:
Nov 19: null
Nov 20: 8
Nov 21: 14
Nov 22: 5 **
Nov 23: null **
Nov 24: null **
Nov 25: null
Nov 26: null
Nov 27: 35
Nov 28: 1
Thank you for that, @dhatch! This feature should default to "true" in $SPLUNK_HOME/etc/system/default/server.conf starting with Splunk 9.1.x. In previous versions, according to the web.conf.spec file, this attribute is not set at all.

Whenever making changes to .conf files in Splunk, please edit files under $SPLUNK_HOME/etc/system/local. If you do not yet have a web.conf file in that path when making such a change, create it, then include only the stanza(s) and attribute(s) you wish to modify. Settings configured via $SPLUNK_HOME/etc/system/local take precedence over all other configuration files on all Splunk instances EXCEPT clustered indexers; in that case, the apps deployed to indexers by the Cluster Manager take precedence even over $SPLUNK_HOME/etc/system/local. See the Splunk documentation on configuration file precedence and this useful community article.
Hi @aditsss... if any reply solved your query, could you please accept it as a solution? Karma points / upvotes are appreciated, thanks.
Hi @abi2023... please check whether the Splunk service is running fine.
Greetings Community! I have a question regarding the Splunk Cloud License (classic), particularly around what happens when the license expires.

- Is there a message informing that the license is about to expire?
- After the expiration date, is there any grace period provided?
- In case I decide not to renew the license, is there a way to download the company data before its total removal? Or do I lose all indexed data after my license expires?

Thanks in advance for any information on this matter.

Kind Regards,
Marcelo