All Topics


Hello, I am creating some reports to measure the uptime of hardware we have deployed, and I need a way to filter out multiple date/time ranges that match up with maintenance windows. We are utilizing a data model and tstats, as the logs span a year or more.

The (truncated) data I have is formatted as follows, for the time range Oct. 3rd - Oct. 7th:

    | tstats summariesonly=true allow_old_summaries=true count(device.status) as count
        from datamodel=Devices.device
        where device.status!="" AND device.customer="*" AND device.device_id="*"
        by device.customer, device.device_id, device.name, device.status _time

    device.customer | device.device_id   | device.name      | device.status | _time      | count
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | offline       | 2022-10-04 | 314
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | offline       | 2022-10-05 | 782
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | offline       | 2022-10-06 | 749
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | offline       | 2022-10-07 | 1080
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | online        | 2022-10-04 | 510
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | online        | 2022-10-05 | 658
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | online        | 2022-10-06 | 691
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | online        | 2022-10-07 | 360
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | warning       | 2022-10-04 | 1
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | warning       | 2022-10-06 | 2
    ppt             | webOS-205AZXCA8162 | Sao Paulo Office | warning       | 2022-10-07 | 1

As the reports will be run by other teams ad hoc, I was attempting to use a 'blacklist' lookup table so they can add the devices, time ranges, or device AND time range combinations they wish to exclude from the results. That lookup table is formatted as such:

    type       | start                         | end                           | deviceID           | note
    time       | 2022-10-03T13:10:30.000-04:00 | 2022-10-04T14:10:30.000-04:00 |                    | test range 10-04-2022 1:30 through 2:10 in EST UTC-4
    device     |                               |                               | 12345              |
    timedevice | 2022-10-04T13:10:30.000-04:00 | 2022-10-05T14:10:30.000-04:00 | webOS-205AZXCA8162 |
    time       | 2022-10-06T13:10:30.000-04:00 | 2022-10-06T14:10:30.000-04:00 |                    | test range 10-06-2022 1:30 through 2:10 in EST UTC-4
    device     |                               |                               | webOS-205AZXCA8122 |

In my head, this works as a report they run over the total timeframe they wish to analyze, with the blacklisted devices, timeframes, and timeframe/device combinations removed as entered in the lookup table. My biggest hang-up right now is finding a way to exclude the unknown quantity of time or timedevice blacklist entries from the total list of results.

Thank you for any help you can provide!
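One possible approach (a sketch, untested; assumes the lookup is saved as device_blacklist.csv and that start/end always use the ISO format shown): have a subsearch read the lookup, turn each row into a search clause keyed on its type, and return the clauses through the special "search" field, which the outer search then negates. Rows returned by a subsearch are ORed together automatically, so any number of blacklist entries is handled.

    <your tstats search>
    | search NOT [
        | inputlookup device_blacklist.csv
        | eval start_e=strptime(start, "%Y-%m-%dT%H:%M:%S.%3N%z"),
               end_e=strptime(end, "%Y-%m-%dT%H:%M:%S.%3N%z")
        | eval search=case(
            type=="device",     "device.device_id=\"" . deviceID . "\"",
            type=="time",       "(_time>=" . start_e . " _time<=" . end_e . ")",
            type=="timedevice", "(device.device_id=\"" . deviceID . "\" _time>=" . start_e . " _time<=" . end_e . ")")
        | where isnotnull(search)
        | fields search ]

Note that tstats buckets _time, so the _time comparisons apply to the bucket timestamp rather than the underlying events; a smaller span in the tstats by-clause gives tighter exclusions.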
I looked around quite a bit and could not find specifically what I am looking for. I have a user who can create dashboards and reports; however, for newly created or existing reports and dashboards he doesn't have the option to change permissions, e.g. private to global, or private / app / all apps. Which capability in Splunk roles controls these options?
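One hedged pointer while this awaits an answer: changing an object's sharing from private to app typically requires write permission on the app itself (set under the app's permissions), and resharing objects regardless of owner is tied to the very broad admin_all_objects capability. A sketch in authorize.conf, with the role name as a placeholder:

    # authorize.conf (sketch; role name is an example, and
    # admin_all_objects is intentionally broad, so grant with care)
    [role_report_builders]
    admin_all_objects = enabled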
We were receiving logs via IMAP before, but it suddenly stopped indexing data. No recent changes were made on our end. Our architecture is IMAP > IMAP mailbox > UF > Splunk indexer. How can we receive email again? We've checked the mailbox and found delivered emails. How can we also determine the server name/hostname of the IMAP server? Which files/permissions do we need to check? Please help.
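A reasonable first diagnostic (a sketch; the host value is a placeholder for your UF): ask the forwarder's own internal logs why the input stalled.

    index=_internal host=<your_uf_host> source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
    | stats count BY component
    | sort - count

On the UF itself, splunk list inputstatus can also show whether the configured inputs are still being read.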
Scenario/requirements:
- We have one eStreamer instance reporting from Firepower Management Console (FMC#1) to our heavy forwarder (HF#1) at HQ in Domain#1.
- We have another eStreamer instance reporting from FMC#2 to our HF#2 in another location in Domain#2.
- We want to redirect FMC#2 in Domain#2 to send eStreamer reporting to HF#1 in Domain#1.
- We want each eStreamer instance sending to a separate index, with each instance running at a different time.

If I understand the documentation correctly, I cannot run two instances of eStreamer at the same time and have to schedule them at separate times. How do I accomplish this? Also, I have been under the impression that I need to clone the TA-estreamer add-on to a different directory and then update indexes.conf and inputs.conf, but I'm not sure what else I would need to change. I would appreciate any help getting this working based on the scenario/requirements above.
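On the scheduling question only, a sketch (the script path and stanza are assumptions; check the actual inputs.conf stanza inside your TA-estreamer copy before borrowing this): scripted inputs accept cron expressions in interval, so the original TA and the clone can alternate hours and never run concurrently.

    # inputs.conf in the original TA-estreamer
    # (stanza path is an assumption)
    [script://./bin/estreamer.sh]
    # even hours
    interval = 0 0-23/2 * * *
    index = estreamer_domain1

    # inputs.conf in the cloned app, e.g. TA-estreamer_domain2
    [script://./bin/estreamer.sh]
    # odd hours
    interval = 0 1-23/2 * * *
    index = estreamer_domain2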
Hi Community,

We have a clustered Splunk install where all data is indexed at the indexer layer (data from heavy forwarders, from the indexers themselves, and even the _internal data from the search head). The total size of the indexes on the search head should be about 1 MB, but I notice that one of our indexes and a few internal indexes still receive data there. This grows the search head's disk usage in addition to the growth of the indexes on the indexers. When I check the last event received for that index on the search head, it shows 8 months ago, both in the GUI and in the backend files; on any of the indexers, the last event was received recently. My question: can I simply delete the DB data files on the search head, or are there steps I need to follow before removing the DB files directly?

Regards, Pravin
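Before deleting anything: the usual fix is to stop the search head from indexing locally at all and forward its data (including _internal) to the indexer layer, which is Splunk's documented best practice for search heads. A sketch, with indexer host names as placeholders:

    # outputs.conf on the search head
    [indexAndForward]
    index = false

    [tcpout]
    defaultGroup = primary_indexers
    forwardedindex.filter.disable = true
    indexAndForward = false

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997

Once nothing new lands locally, the stale local buckets can be removed with splunk clean eventdata -index <name> while Splunk is stopped, rather than deleting DB files by hand.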
I have a text box in a Splunk dashboard, and I'm trying to find out how I can turn comma-separated values entered into the text box into an OR clause. For example, with these values entered into the text box:

    102.99.99, 103.99.93, 203.23.21

this search (index=abc sourcetype=abc src_ip="$ip$") should translate to:

    index=abc sourcetype=abc (src_ip="102.99.99" OR src_ip="103.99.93" OR src_ip="203.23.21")

Any suggestions?
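One common pattern (a sketch; assumes the token is $ip$ and that entries may carry stray spaces): split the token inside a subsearch and let format build the OR clause for you.

    index=abc sourcetype=abc
        [| makeresults
         | eval src_ip=split("$ip$", ",")
         | mvexpand src_ip
         | eval src_ip=trim(src_ip)
         | fields src_ip
         | fields - _time
         | format ]

With the example input, the subsearch expands to ( ( src_ip="102.99.99" ) OR ( src_ip="103.99.93" ) OR ( src_ip="203.23.21" ) ).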
I want to test whether my ITSI KPIs are working as expected, so I'm creating fake events with collect that should trigger the KPI. However, the existing onboarding brings in real data that prevents the KPI from reaching the expected state. E.g., I onboard my fake data at minute 1, the real data arrives at minute 2, and the KPI base search runs at minute 5; it takes the last state and sees that everything is fine. How can I disable all data onboarding in an easy way while running tests?
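If the real data arrives through a forwarder input you control, the bluntest switch is to disable that input for the duration of the test (a sketch; the stanza name is an example, and the forwarder needs a restart or reload afterwards):

    # inputs.conf on the forwarder that onboards the real data
    # (stanza name is an example)
    [monitor:///var/log/app/alerts.log]
    disabled = 1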
I am getting fewer events when using the rename command in Splunk (compared to the same search without rename). What could be the reason behind this?
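One frequent cause, sketched with placeholder fields: a later clause that still references the old field name, which then silently matches nothing or drops events.

    index=web
    | rename clientip AS src_ip
    | search clientip="10.0.0.*"

The last line returns zero events because, after the rename, the field is called src_ip; comparing counts with and without the rename at each pipe stage usually pinpoints where the results diverge.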
Hello everyone! I am working in a test environment where I only have one Splunk instance. I edited the journal.zst file in one of my buckets (I have a backup) just to test data integrity, so the bucket is now corrupted. My question is: is there a way to not lose all the events in this bucket? I tried fsck and rebuild, but the bucket is still corrupted, with the data integrity check being unsuccessful, which is expected. I'm not sure whether it's realistic to face such an issue in the real world, but I'm curious what the best strategy would be. Any help would be appreciated.
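For reference, the standard single-bucket repair attempts look like this (a sketch; the bucket path is an example). That said, if the journal bytes themselves were altered, there is nothing intact to rebuild from, since the journal is the source of truth for the bucket; restoring the bucket directory from your backup is realistically the only full recovery.

    splunk stop
    splunk fsck repair --one-bucket --bucket-path=/opt/splunk/var/lib/splunk/myindex/db/db_1665187200_1664841600_42
    splunk rebuild /opt/splunk/var/lib/splunk/myindex/db/db_1665187200_1664841600_42
    splunk start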
How many duplicated events do we have? What percentage of events are duplicates? And what is the difference between the duplicated and unique event counts?
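A sketch that treats events with identical raw text as duplicates (the index name is a placeholder; add _time, source, or host to the by-clause for a stricter definition):

    index=your_index
    | stats count AS copies BY _raw
    | stats sum(copies) AS total_events,
            count AS unique_events,
            sum(eval(copies - 1)) AS duplicate_events
    | eval pct_duplicates = round(100 * duplicate_events / total_events, 2)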
Inner join is not displaying any results. The search runs without errors; however, nothing shows up on the screen:

    index=tenable
    | rename hostnames as host.name
    | table host.name
    | join type=inner host.name
        [search (index=assetpanda) | fields host.name]
    | table host.name
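One likely culprit is the dotted field name; host.name needs single quotes in eval contexts and is easy to mishandle in a join. A join-free pattern is usually more robust (a sketch, assuming hostnames and host.name hold comparable values):

    (index=tenable) OR (index=assetpanda)
    | eval host_name=coalesce(hostnames, 'host.name')
    | stats dc(index) AS index_count BY host_name
    | where index_count=2
    | fields host_name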
I wanted to search for the websites/URLs that people visited today, for a particular user. I tried this but didn't get any results; any suggestions, please?

    index="*" sourcetype="WinEventLog:Security" user="*"

Also, I'd like the login and logout times from Active Directory.
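Windows Security logs do not record visited URLs; that needs proxy, DNS, or firewall data. The logon/logoff half is answerable, though. A sketch, with index and user as placeholders (EventCode 4624 is a logon, 4634 a logoff):

    index=wineventlog sourcetype="WinEventLog:Security" user="jdoe"
        (EventCode=4624 OR EventCode=4634)
    | eval action=if(EventCode=4624, "login", "logout")
    | sort _time
    | table _time, user, action, ComputerName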
I'm trying to combine two similar values from the same field and rename the result. I would like to combine /v1/product and /v1/product/ and rename the combined value "Product API".

Search string: | stats count by urlPthTxt

I tried a few different commands, but they didn't work. Please help. Thanks!
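A sketch: strip any trailing slash first, then relabel (the case() mapping is an example; extend it for other endpoints):

    | eval urlPthTxt=replace(urlPthTxt, "/+$", "")
    | eval api=case(urlPthTxt=="/v1/product", "Product API", true(), urlPthTxt)
    | stats count BY api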
I am writing a test in Python where I want to validate that an ITSI KPI works as expected. Let's say I have an index called alerts, and I want the following event in that index, because it should trigger an alert:

    {"alert":"true", "time":"1666702756"}

I know there is a Splunk Eventgen application, but it feels too big for adding a single line. What is the simplest way to add one event to an index? For example, is it possible with an API call? I tried looking around but could not find a good example for something that feels very trivial.

Note: we cannot use the Splunk Python SDK, as we use a custom proxy/URL and the SDK does not support custom URLs. We are able to run queries with our own Python script, so if it is possible with an SPL query, that is fine too.
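If an SPL query is acceptable, makeresults plus collect is about as small as it gets (index name taken from the post; the _time override is shown as an example):

    | makeresults
    | eval alert="true", _time=1666702756
    | collect index=alerts

The API route would be the HTTP Event Collector (POST to /services/collector/event with an HEC token), which works from any HTTP client and therefore sidesteps the SDK and custom-URL limitation.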
I have seen several posts asking similar questions, but I am not much of a UI guy, so they don't make sense to me. I have a Dashboard Studio dashboard, currently using a single value radial widget to display Yes or No based on a query. I can post the query if it's helpful, but I don't think it matters, as the query just returns the string 'Yes' or 'No'. If the query result is Yes, I want the widget's background green; if No, I would like it red. I am not committed to a single value radial; it's just what I was able to get working. Any suggestions on how to do this with a single value radial, or on a different widget to use and how to change its background color?

    {
        "type": "viz.singlevalueradial",
        "title": "Non-Cycle Delivery Met",
        "dataSources": {
            "primary": "ds_3T2iIKSr"
        },
        "encoding": {},
        "options": {
            "backgroundColor": "#ffffff"
        },
        "showLastUpdated": true,
        "context": {},
        "showProgressBar": false
    }
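Dashboard Studio supports dynamic coloring through matchValue; a sketch of the wiring, where the field name 'result' and the hex colors are assumptions, and where majorColor is the safer target if backgroundColor on the radial ignores dynamic values:

    "options": {
        "backgroundColor": "> primary | seriesByName('result') | matchValue(backgroundColorConfig)"
    },
    "context": {
        "backgroundColorConfig": [
            { "match": "Yes", "value": "#118832" },
            { "match": "No", "value": "#D41F1F" }
        ]
    }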
I need a Splunk management app to monitor all Splunk servers for out-of-date or out-of-sync (running different versions) apps. Does anyone know which one I can use?
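Not an app as such, but a one-off check is possible from any search head that has the other instances configured as search peers (a sketch):

    | rest /services/apps/local splunk_server=*
    | stats values(version) AS versions, dc(version) AS version_count BY title
    | where version_count > 1

The bundled Monitoring Console also tracks instance versions centrally when the instances are set up as its peers.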
I made changes to search/metadata/local.meta that need to be deployed to the search heads. search/local/app.conf contains:

    [shclustering]
    deployer_push_mode = local_only

When I stage/send from the SHC deployer, after a rolling restart, search/metadata/local.meta on the search head captain remains the same as before the push. Does the SHC deployer push metadata/local.meta?

Splunk Enterprise 8.2.3 running on Red Hat Linux; 8-node search head cluster. The permission changes were made as follows.

From:

    []
    access = read : [ * ], write : [ admin, power ]
    export = system
    version = 8.2.3.2
    modtime = 1666399466.315512000

To:

    []
    access = read : [ admin, number_of_roles_here, user, user_ad_user ], write : [ admin, power ]
    owner = admin
    export = none
    version = 8.2.3.2
    modtime = 1666381189.171483000
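For comparison, and hedged because I have not verified the metadata behavior on 8.2 specifically: app.conf documents four push modes (full, merge_to_default, default_only, local_only), and with local_only the members' existing local settings can win over what the deployer sends, which may make a pushed local.meta appear unchanged. A sketch of the alternative:

    # app.conf sketch: 'full' pushes local content merged
    # into the default layer, including metadata
    [shclustering]
    deployer_push_mode = full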
We recently moved our Windows event log service up to Windows 2016 and Splunk 9.0.1, and all Security auditing events are coming through with the message:

    Message=Splunk could not get the description for this event. Either the component that raises this event is not installed on your local computer or the installation is corrupt.

The event data is present, but without the usual field descriptions that allow Splunk to work out the structure. There are many posts on this, but they all date from over two years ago and all refer back to a master post from 2014 (https://community.splunk.com/t5/Getting-Data-In/quot-FormatMessage-error-quot-appears-in-indexed-message-for/m-p/139982#M28765) that doesn't appear to apply to current versions of Windows. I have, however, followed the broad advice in there:
- Checked the registry keys; they match the old server.
- Started Splunk after the event log service (I tried stopping and starting Splunk on a running host to mimic this).
- Confirmed that the event format is set to Events.

HF is Splunk 9.0.1 / Windows 2016 version 1607 build 14393.5427; Splunk Cloud is version 9.0.2208.3.
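One workaround worth testing (a sketch; it changes the event format from plain text to XML, so the Splunk Add-on for Microsoft Windows should be in place to parse the result): render events as XML so Splunk no longer depends on the publisher's message DLLs to format descriptions.

    # inputs.conf on the collecting HF
    [WinEventLog://Security]
    renderXml = true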
I have a search that a user recently moved from every hour to every 10 minutes. Cron:

    3-59/10 * * * *

The search takes ~2 minutes to run. The window is set to auto. BUT, I see the issue:

    10-25-2022 06:13:00.633 +0000 INFO SavedSplunker - savedsearch_id="nobody;rcc; Pull: Pull Domain IOCs from MISP", search_type="scheduled", user="thatOneGuyCausingProblemsForMe", app="myApp", savedsearch_name="Pull: Pull Domain IOCs from MISP", priority=default, status=skipped, reason="The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_cluster-wide", concurrency_limit=1, scheduled_time=1666678380, window_time=-1, skipped_count=1, filtered_count=0

We have 4 very similar searches (similar schedule, duration, window, etc.), all with the same error, and the error fires very consistently. Splunk's complaint is that the given search is trying to run while another instance of the same search is still running. But the searches only take ~2 minutes, and there are 10 minutes between runs. I understand I can go into limits.conf and raise concurrency, but I don't see how these searches are overlapping themselves, and I don't want to just hide the problem behind more CPUs.
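Worth verifying the actual wall-clock runtime before touching limits.conf; the scheduler log records it per run (a sketch, with savedsearch_name copied from the log line above):

    index=_internal sourcetype=scheduler status=success
        savedsearch_name="Pull: Pull Domain IOCs from MISP"
    | eval run_minutes=round(run_time/60, 1)
    | timechart span=1h max(run_minutes)

If run_time genuinely stays near two minutes, the next suspect is dispatch lag: all four searches share the :03 offset, so on a busy cluster a delayed start can push one run into the next scheduled slot.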
Hello all, we have an application using the default database:internal sourcetype, sending data to Splunk via DB Connect, and I want to extract a couple of fields from the incoming data on the Splunk side. If I switch to a custom sourcetype (database:internal:xxxx:zzzz), do I need to define the extractions myself with "EXTRACT-xxxx = xxxx" in props.conf, or will the intended fields be extracted by default once we change to the custom sourcetype? Any explanation would be really appreciated. Thanks.
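On the question itself: search-time extractions are keyed to the sourcetype, so a renamed sourcetype inherits nothing automatically; fields would only appear by default if some app ships extractions for that exact sourcetype name. You would define your own, e.g. (the field name and regex are placeholders):

    # props.conf on the search head
    [database:internal:xxxx:zzzz]
    EXTRACT-session_id = session_id=(?<session_id>\w+)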