All Topics


If I do an index search, raw events are listed in reverse _time order, which is often also the reverse _indextime order, so I can't tell exactly which. But if I table the results, the table is no longer in this order. Why is that? I used the following to inspect the table:

  sourcetype=sometype
  | eval indextime=strftime(_indextime, "%F %T")
  | table _time indextime

The table roughly lists later entries first, but not consistently; entries are often swapped by hours.
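If an explicit order matters, it may help to sort before tabling; a minimal sketch (reusing the sourcetype from above) that pins the output to strict reverse _time order:

  sourcetype=sometype
  | eval indextime=strftime(_indextime, "%F %T")
  | sort 0 - _time
  | table _time indextime

Swapping "- _time" for "- _indextime" would also show whether the two orderings actually differ for this data.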
Hello all, I'd like to compare events in the same log files, assuming the format of the events is the same. For example:

event1: ccc, ddd
event2: bbb, ccc
event3: aaa, bbb

As you can see there's a pattern: the 2nd part of event3 (bbb) is always the same as the 1st part of event2, and the 2nd part of event2 (ccc) is always the same as the 1st part of event1. My question is: how do I check whether all the events in the same log file match this pattern? Thank you in advance! Sincerely, Gai
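One hedged sketch of how this chain check might look in SPL, assuming each raw event can be parsed into two comma-separated parts (the rex pattern and field names are illustrative). autoregress copies the first part of the previous event in result order, so each event's second part can be compared against it:

  source=mylogfile
  | rex "(?<part1>\w+),\s*(?<part2>\w+)"
  | autoregress part1 AS prev_part1
  | eval chain_ok=if(isnull(prev_part1) OR part2==prev_part1, "yes", "no")
  | stats count(eval(chain_ok=="no")) as broken_links by source

A source with broken_links=0 would match the pattern throughout; this relies on the default reverse-time result order matching the order shown in the example.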
Hey everyone, I am trying to gauge at what time users are active on our app. I want to use data from All time to gather the average on a 24-hour scale. Is there a way I can see the average by hour of day? Right now this just shows the counts at the times when users log in; it would be super useful to know how many users on average use the app at X AM/PM. My current query is:

  index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION
  | timechart span=1h count

This query can gather the users by hour on a 24-hour scale, but not the average from All time. If anyone could help, it would be greatly appreciated!
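A hedged sketch of one common approach: count session-start events per hourly bucket first, then average those counts by hour of day (search terms follow the query above):

  index=app1 service=app logLevel=INFO environment=prod "message.eventAction"=START_SESSION
  | bin _time span=1h
  | stats count as sessions by _time
  | eval hour=strftime(_time, "%H")
  | stats avg(sessions) as avg_sessions by hour
  | sort 0 hour

Run over All time this should yield one row per hour of day (00-23) with the average session count. Note it averages session-start events, not distinct users; averaging distinct users would need a dc() over whatever user field the events carry.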
I want to create a 30-day index of data that changes its indexed timestamp as each day passes, so the data will always show up when I do a last-30-days search and I don't need to pick out the specific 30 days I saved. I.e., if I started with January data, on June 1st the original data from January should represent the month of May. Is there any way to change the time of the data in the index every day? Or does it have to be deleted from the index and re-added?
Hello all, I have a simple dashboard with a dropdown under the title. When I add styles to the title, the dropdown input element interferes with it and the full height of the title panel is not visible. This is my current inline style content. I want to display the full height of my title panel. Can anyone help?

  <label>Endpoint Configurations Summary Dashboard</label>
  <row depends="$alwaysHideCSSPanel$">
    <panel>
      <html>
        <style>
          .dashboard-panel h2 {
            background: #6495ED !important;
            color: white !important;
            text-align: center !important;
            font-weight: bold !important;
            border-top-right-radius: 15px;
            border-top-left-radius: 15px;
          }
          .highcharts-background {
            fill: #ffffff !important;
          }
          .highcharts-grid-line {
            fill: #ffffff !important;
          }
          h1 {
            background: #6495ED !important;
            color: white !important;
            text-align: center !important;
            font-weight: bold !important;
            border-top-right-radius: 15px;
            border-top-left-radius: 15px;
          }
          h2, h3, p {
            color: #696969 !important;
            text-align: center !important;
          }
        </style>
      </html>
    </panel>
  </row>
I want to export the result of a Splunk dashboard, where authentication would be via SSO/SAML. I can provide the username, password, and Splunk dashboard URL so that Python can export the dashboard panel and save the exported result as CSV.
Hello, I have a search that runs in the web application interface (Splunk Enterprise). It returns results as and when log events are present within the search parameters (time window). When I execute the exact same search at the same time via the REST API using Postman, it completes (job status="DONE") but with zero available events, or any events at all. Why might that happen? The search is copied and pasted from the web app to the API call in Postman. On occasion it has worked, but maybe one in a thousand calls will fetch results. Thank you.
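Two common pitfalls worth ruling out, sketched below with placeholder host, credentials, and query: the REST API does not apply the UI's time picker, so earliest_time/latest_time must be passed explicitly, and the search string must begin with the "search" keyword when submitted as a job. A minimal curl equivalent of the Postman call might look like:

  # assumption: placeholder host, credentials, and query
  curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs \
    -d search="search index=main sourcetype=mylogs error" \
    -d earliest_time="-24h" \
    -d latest_time="now"

If the "search" prefix or an explicit time window is missing, the job can legitimately finish as DONE with zero results.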
I'm attempting to build a search around Okta authentication logs. I want to run a query to check for any multi-factor update/change, collect the user ID, and pass that to another search where I can see the geolocation data from which the user has authenticated previously over a specific time span. Essentially, I'm trying to build a search to see if a user that requested an MFA change is doing it from a different geolocation than they normally authenticate from. The query below shows all users that have had an MFA change, with their corresponding geolocation data. Is there a way to pass the user ID(s) to a different search where I can look at 7 days' worth of their authentication activity to see if the geolocation matches? I've researched subsearches, but that doesn't work because I need the user ID first and the subsearch runs first, so I don't have the user ID yet. I looked at map, which seems like it's the best solution, but there are a lot of warnings about it being resource intensive. If anyone can point me in the right direction, it would be very much appreciated.

  index=okta eventType="user.mfa.factor.update"
  | stats values(actor.id), values(client.geographicalContext.State)
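One hedged way to avoid map is to invert the two searches: make the MFA-change query the subsearch (it runs first and returns the user IDs), and let the outer search pull 7 days of authentication activity for just those users. A sketch, assuming the Okta field names above; the sign-in eventType and the time windows are illustrative:

  index=okta eventType="user.authentication.sso" earliest=-7d
      [ search index=okta eventType="user.mfa.factor.update" earliest=-24h
        | stats count by actor.id
        | fields actor.id ]
  | stats values(client.geographicalContext.State) as auth_states by actor.id

The subsearch result is expanded into an actor.id=... OR actor.id=... filter for the outer search, so the user IDs are known before the 7-day lookup runs.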
Hello, how would I set up a monitor stanza to pull the data/files from variable paths/locations? Some example paths, along with the monitor stanza I wrote, are provided below.

Paths/Locations:

/RTAM/PROD_LOGS/PROD_DATA/2021-01-28_03-39-15/AUDITDATA/APPS/AUDITPROD.txt
/RTAM/PROD_LOGS/PROD_DATA/2021-01-29_09-12-12/AUDITDATA/APPS/AUDITPROD.txt
.........
..........
.........
/RTAM/PROD_LOGS/PROD_DATA/2021-02-02_06-19-10/AUDITDATA/APPS/AUDITPROD.txt
/RTAM/PROD_LOGS/PROD_DATA/2021-02-02_08-07-14/AUDITDATA/APPS/AUDITPROD.txt

The monitor stanza I wrote:

  [monitor:///RTAM/PROD_LOGS/PROD_DATA/.../AUDITDATA/APPS/AUDITPROD.txt]

Is this going to work to pull the data/files from all of the locations mentioned above? Any help/feedback will be highly appreciated. Thank you.
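For what it's worth, a hedged note on the two inputs.conf wildcards: "..." recurses through any number of directory levels, while "*" matches a single level. Since the timestamped directory is exactly one level deep in these paths, either of these sketches should match them (paths as in the examples above):

  # recursive: matches the timestamped dir at any depth
  [monitor:///RTAM/PROD_LOGS/PROD_DATA/.../AUDITDATA/APPS/AUDITPROD.txt]

  # single level: matches exactly one directory between PROD_DATA and AUDITDATA
  [monitor:///RTAM/PROD_LOGS/PROD_DATA/*/AUDITDATA/APPS/AUDITPROD.txt]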
Hello Splunk Community, how can I move the addtotals field to display as the first column, rather than the last, for this chart?

Currently:

  _time             Host123  Host456  total
  2022-02-24 22:00  0        2        2

Would like:

  _time             total  Host123  Host456
  2022-02-24 22:00  2      0        2

Current code:

  index="Dept_data_idx" eventType="Created" status="success" host=*
  | bucket _time span=1h
  | stats count by _time host
  | addtotals
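A hedged sketch of one way to force the column order: name the totals column explicitly, then use table with a trailing wildcard so the remaining host columns follow it. Note that chart is swapped in here for the stats-by form so that hosts become columns, matching the table shown above:

  index="Dept_data_idx" eventType="Created" status="success" host=*
  | chart count over _time by host
  | addtotals fieldname=total
  | table _time total *

table lists the named fields first, and the wildcard then picks up every other field without duplicating the ones already placed.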
I'm trying to create a calculated field (eval) that will coalesce a bunch of username fields, then perform match() and replace() functions within a case statement. Here's the scenario:

Possible user fields: UserName, username, User_ID
User values need the domain removed (e.g., "user@domain.com" or "ad\user" needs to become "user").

Here is how it can be done in two evals (I newlined and indented each case for readability):

  | eval user_coalesced = coalesce(UserName, username, User_ID)
  | eval user = case(
      match(user_coalesced, ".*@.*"), replace(user_coalesced, "@.*", ""),
      match(user_coalesced, "^ad\\\\"), replace(user_coalesced, "^ad\\\\", ""),
      true(), user_coalesced
    )

Any ideas on how I can get this down to one? I thought about putting the coalesce() into each case, but that seems inefficient.
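A hedged one-eval sketch: since the two replaces target non-overlapping patterns (a trailing @domain or a leading ad\), they can simply be nested around the coalesce, and values matching neither pattern pass through unchanged, which removes the need for case() entirely:

  | eval user = replace(replace(coalesce(UserName, username, User_ID), "@.*$", ""), "^ad\\\\", "")

replace() returns its input untouched when the regex does not match, so the match() guards are not strictly needed.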
I have a dashboard that is based on a scheduled report. The report is scheduled to run at 06:00 every day, and every day the job shows as done with success status, however there is nothing in the report. When I run the report manually it takes 1 hour to complete with lots of search results (events), but when scheduled it shows "Done" after 1 hour (sometimes a couple of minutes) with an empty report (0 events). Why is the report not generating results? Can you help troubleshoot the problem?
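A hedged starting point for troubleshooting: the scheduler writes an entry per run to the _internal index, so comparing the scheduled runs' result counts and run times against the manual run may show whether the search is being skipped, truncated, or genuinely returning nothing (the savedsearch_name value is a placeholder):

  index=_internal sourcetype=scheduler savedsearch_name="My Report"
  | table _time status result_count run_time

Scheduled searches also run as their owner, with that user's permissions and the report's own saved time range, which are two common reasons a scheduled run behaves differently from a manual one.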
We have lots of firewalls (both internal and internet facing) feeding into our CIM Network_Traffic model within Enterprise Security. I would like to be able to distinguish the traffic that comes from the internet from other traffic. One way that occurred to me is to modify the CIM Network_Traffic model to have an extra "inheritance" (alongside Allowed_Traffic and Blocked_Traffic), something like Internet_Traffic, with the constraint specifying the appropriate dvc and src_interface values. Is this a good idea? Would it break anything? How would it work w.r.t. updates/upgrades to the CIM model?
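For concreteness, a hedged sketch of what the constraint on such a child dataset might look like (the device name pattern and interface value are illustrative):

  (dvc=edge-fw-* OR src_interface=outside)

One caveat: edits to the shipped Network_Traffic model live in local configuration and can diverge from the defaults a CIM app upgrade ships, so they would typically need re-validation after each upgrade, which is part of what the question is asking about.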
We're using Splunk Cloud, and management made the decision to send from UFs straight to the Splunk Cloud indexers. As such, we have run into a number of issues with various TAs not deployed to the Cloud indexers. How can I generate a list of deployed apps/TAs that are on the Cloud indexers?
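A hedged sketch using the REST endpoint for locally installed apps; whether splunk_server can actually reach the Cloud indexers depends on the deployment, so this is a starting point rather than a guaranteed answer:

  | rest /services/apps/local splunk_server=*
  | table splunk_server title version

Running it without splunk_server=* returns only the search head's own apps, which is the default.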
Hi, I'm trying to route data to a specific index based on a value in a field. I have a series of events that look like this:

  Mar 1 16:26:52 xxx.xxx.xxx.xxx Mar 01 2022 16:26:52 hostname : %FTD-6-113008: AAA transaction status ACCEPT : user = username
  Mar 1 17:42:18 xxx.xxx.xxx.xxx Mar 01 2022 17:42:18 hostname : %ASA-6-611101: User authentication succeeded: IP address: xxx.xxx.xxx.xxx, Uname: username

My props.conf on the indexer looks like this:

  [cisco:asa]
  TRANSFORMS-01_index = force_index_asa_audit

My transforms.conf on the indexer looks like this:

  [force_index_asa_audit]
  DEST_KEY = _MetaData:Index
  REGEX = (?:ASA|FTD)-\d+-(?:113008|113012|113004|113005|611101|605005|713166|713167|713185|716038|716039|713198|502103|111008|111010)
  FORMAT = asa_audit

But unfortunately nothing happens. I've also tried using source in props.conf, with no successful result. Do you have any idea? Thanks a lot, Marta
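Two hedged checks that often explain "nothing happens" here. First, index-time transforms only run on the first full Splunk instance the data passes through, so if these events arrive via a heavy forwarder, the props/transforms need to live there rather than on the indexer. Second, the regex itself can be sanity-checked at search time (the index name is a placeholder):

  index=network sourcetype=cisco:asa
  | rex "%(?<product>ASA|FTD)-\d+-(?<msg_id>\d+)"
  | stats count by product msg_id

If msg_id never matches the IDs listed in the transform, the events' sourcetype or format differs from what the transform expects.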
Hi, our indexer cluster peers' status occasionally fluctuates from Up to Pending. I have verified that there are no resource shortage problems (memory/CPU) when the indexer cluster peers fluctuate. What could be the reason for fluctuating indexer peers?

The splunkd health log is showing messages like this:

  01-02-2022 02:00:11.893 +0000 INFO PeriodicHealthReporter - feature="Indexers" color=yellow indicator="missing_peers" due_to_threshold_value=1 measured_value=3 reason="The following peers are in transition: Indexer1(Pending), Indexer2(Pending), Indexer3(Pending). " node_type=indicator node_path=splunkd.indexer_clustering.indexers.missing_peers

The watchdog/watchdog.log source is showing messages like this:

  01-02-2022 02:15:19.01 +0000 ERROR Watchdog - No response received from IMonitoredThread=0x7fe9c63f70 within 8000 ms. Looks like thread name='CMMasterRemoteStorageThread' tid=28158 is busy !? Starting to trace with 8000 ms interval.
  01-02-2022 02:16:23.12 +0000 INFO Watchdog - Stopping trace. Response for IMonitoredThread ptr=0x7fefb70 - thread name='CMMasterRemoteStorageThread' tid=28158 - finally received after 72049 ms (estimation only).
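A hedged search for correlating these transitions over time, using the same PeriodicHealthReporter events quoted above (the fields are key=value pairs in the log line, so they should extract automatically):

  index=_internal sourcetype=splunkd component=PeriodicHealthReporter indicator=missing_peers
  | table _time color measured_value reason

Lining these up against the Watchdog errors on the cluster manager may show whether the Pending transitions coincide with the busy CMMasterRemoteStorageThread, which would point at the manager side (for example, slow remote storage calls) rather than at the peers themselves.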
How do I create a search that would display the time, user, hostname, and URL that a list of users are visiting?
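A minimal hedged sketch, assuming proxy/web events; the index, field names, and user list are all placeholders:

  index=proxy user IN ("alice", "bob")
  | table _time user host url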
Windows disk performance latencies ("Avg. Disk sec/Transfer", etc.) are given in seconds. Splunk_TA_windows renders these for the CIM as:

  [Perfmon:LogicalDisk]
  EVAL-latency = if(counter=="Avg. Disk sec/Transfer",Value*1000,null())
  EVAL-read_latency = if(counter=="Avg. Disk sec/Read",Value,null())
  EVAL-write_latency = if(counter=="Avg. Disk sec/Write",Value,null())

Why is one given as milliseconds, but the others as seconds?
Hello everybody, I am upgrading Splunk Enterprise from 7.3.X to 8.2.5 (Windows). For compatibility, I also need a more recent Windows version on my hosts to support Splunk, so I'm going to use a new host for each server. The architecture includes:

- 1 cluster master
- 1 deployment server
- 1 search head
- 2 indexers (clustered)
- 1 poller (heavy forwarder)
- n universal forwarders

I've found HERE how to migrate a Splunk Enterprise instance from one physical machine to another; can anybody confirm the following procedure for me?

- Stop Splunk Enterprise services on the host from which I want to migrate
- Roll any hot buckets on the source host from hot to warm
- Copy the entire contents of the $SPLUNK_HOME directory, and all the directories containing buckets, from the old host to the new one
- Turn off the old host
- Configure the new host to have the same IP address and hostname as the old host. This avoids having to redirect forwarders to the new instance
- Install Splunk Enterprise 7.3.X on the new host
- Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host
- Start Splunk Enterprise on the new instance
- Log into Splunk Enterprise and confirm that the data is intact by searching it
- Upgrade from 7.3.X to 8.1.X and then to 8.2.5

Should I apply these steps to every host? What about the two indexers? I'm going to need to migrate data; what's the correct procedure there (see the CLI sketch below)? Also, I'm afraid that the new installation would re-ingest data from the poller; should I do something to prevent it? Last thing: I'll probably need to change the IP of one indexer; when should I change its configuration?

Thanks in advance for any help.
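On the two indexers specifically, a hedged sketch of the usual cluster-aware wrapper around the per-host steps above (standard Splunk CLI commands, run where indicated):

  # on the cluster master, before taking a peer down, to suppress bucket fixup activity
  splunk enable maintenance-mode

  # on the peer being migrated, instead of a plain stop
  splunk offline

  # on the cluster master, once the migrated peer has rejoined
  splunk disable maintenance-mode

splunk offline lets the peer leave the cluster gracefully, and maintenance mode keeps the master from kicking off bucket fixups while a peer is intentionally absent.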
I can't seem to figure this out. I've read every thread on here, as well as the Splunk docs relating to this. The SPL output looks like I want it to, but on the dashboard everything is blue. I've added fieldColors to my source, but still can't get it to work. What am I missing? Attachment provided.

  index=health_checks dev=false
  | stats avg(eval(round(uptime_minutes*100,0))) as uptime, avg(eval(round(month_minutes*100,0))) as month by customer
  | eval score=round(uptime/month*100,0)
  | eval range=case(score < 75, "severely degraded", score >= 75 AND score < 95, "slightly degraded", score >= 95, "healthy")
  | stats count(score) as stacks by range

  <option name="charting.fieldColors">{"healthy": 0x008000, "slightly degraded": 0xFFFF00, "severely degraded": 0xFF0000, "NULL": 0xC4C4C0}</option>