All Topics



Hello, how would I set up a monitor input to pull the data/files from variable paths/locations? Some examples along with the monitor stanza are provided below.

Paths/Locations:
/RTAM/PROD_LOGS/PROD_DATA/2021-01-28_03-39-15/AUDITDATA/APPS/AUDITPROD.txt
/RTAM/PROD_LOGS/PROD_DATA/2021-01-29_09-12-12/AUDITDATA/APPS/AUDITPROD.txt
.........
/RTAM/PROD_LOGS/PROD_DATA/2021-02-02_06-19-10/AUDITDATA/APPS/AUDITPROD.txt
/RTAM/PROD_LOGS/PROD_DATA/2021-02-02_08-07-14/AUDITDATA/APPS/AUDITPROD.txt

Monitor stanza I wrote:
[monitor:// /RTAM/PROD_LOGS/PROD_DATA/.../AUDITDATA/APPS/AUDITPROD.txt]

Is this going to work to pull the data/files from all of the locations mentioned above? Any help/feedback will be highly appreciated. Thank you.
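For reference, a minimal inputs.conf sketch. The `...` wildcard in a monitor path recursively matches any number of directory levels, so it should pick up all of the dated subdirectories; note, though, that there must be no space after `monitor://`. The index and sourcetype names here are placeholders, not values from the original post:

```ini
# inputs.conf (sketch; index/sourcetype are hypothetical placeholders)
[monitor:///RTAM/PROD_LOGS/PROD_DATA/.../AUDITDATA/APPS/AUDITPROD.txt]
index = rtam_audit
sourcetype = auditprod
disabled = 0
```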
Hello Splunk Community, how can I move the addtotals field to display as the first column, and not the last, for this chart?

Currently:
_time Host123 Host456 total
2022-02-24 22:00 0 2 2

Would like:
_time total Host123 Host456
2022-02-24 22:00 2 0 2

Current code:
index="Dept_data_idx" eventType="Created" status="success" host=*
| bucket _time span=1h
| stats count by _time host
| addtotals
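One common sketch for reordering (hedged; this assumes the totals column is named Total, which is addtotals' default — adjust to whatever field name actually appears in your results): list the totals field explicitly in a trailing table command, before a wildcard that carries the remaining host columns:

```
index="Dept_data_idx" eventType="Created" status="success" host=*
| bucket _time span=1h
| stats count by _time host
| addtotals
| table _time Total *
```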
I'm trying to create a calculated field (eval) that will coalesce a bunch of username fields, then perform match() and replace() functions within a case statement. Here's a scenario:

Possible user fields: UserName, username, User_ID
User values need the domain removed (e.g., "user@domain.com" or "ad\user" needs to be "user").

Here is how it can be done in two evals (I newlined and indented each case for readability):

| eval user_coalesced = coalesce(UserName, username, User_ID)
| eval user = case(
    match(user_coalesced, ".*@.*"), replace(user_coalesced, "@.*", ""),
    match(user_coalesced, "^ad\\\\"), replace(user_coalesced, "^ad\\\\", ""),
    true(), user
)

Any ideas on how I can get this down to one? I thought about putting the coalesce() into each case, but that seems inefficient.
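One possible single-eval sketch (hedged; this relies on replace() returning its input unchanged when the pattern doesn't match, so the two replaces can simply be chained instead of branching with case()):

```
| eval user = replace(replace(coalesce(UserName, username, User_ID), "@.*$", ""), "^ad\\\\", "")
```

If a value somehow contained both an `ad\` prefix and an `@domain` suffix, both would be stripped, which may or may not be the desired behavior.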
I have a dashboard that is based on a scheduled report. The report is scheduled to run at 06:00 every day, and every day the job shows as done with success status; however, there is nothing in the report. When I run the report manually, it takes 1 hour to complete with lots of search results (events); however, when scheduled, it shows "Done" after 1 hour (sometimes a couple of minutes) with an empty report (0 events). Why is the report not generating results? Can you help troubleshoot the problem?
We have lots of firewalls (both internal and internet-facing) feeding into our CIM Network_Traffic model within Enterprise Security. I would like to be able to distinguish the traffic that comes from the internet from other traffic. One way that occurred to me is to modify the CIM Network_Traffic model to have an extra "inheritance" (alongside Allowed_Traffic and Blocked_Traffic), something like Internet_Traffic, with the constraint specifying the appropriate dvc and src_interface values. Is this a good idea? Would it break anything? How would it work w.r.t. updates/upgrades to the CIM model?
We are using Splunk Cloud, and management made the decision to send from UFs straight to the Splunk Cloud indexers. As such, I have run into a number of issues with various TAs not deployed to the Cloud indexers. How can I generate a list of deployed apps/TAs that are on the Cloud indexers?
Hi, I'm trying to route data to a specific index based on a value in a field. I have a series of data that look like this:

Mar 1 16:26:52 xxx.xxx.xxx.xxx Mar 01 2022 16:26:52 hostname : %FTD-6-113008: AAA transaction status ACCEPT : user = username
Mar 1 17:42:18 xxx.xxx.xxx.xxx Mar 01 2022 17:42:18 hostname : %ASA-6-611101: User authentication succeeded: IP address: xxx.xxx.xxx.xxx, Uname: username

My props.conf on the indexer looks like this:

[cisco:asa]
TRANSFORMS-01_index = force_index_asa_audit

My transforms.conf on the indexer looks like this:

[force_index_asa_audit]
DEST_KEY = _MetaData:Index
REGEX = (?:ASA|FTD)-\d+-(?:113008|113012|113004|113005|611101|605005|713166|713167|713185|716038|716039|713198|502103|111008|111010)
FORMAT = asa_audit

But unfortunately nothing happens. I've also tried using source in props.conf, with no successful result. Do you have any idea? Thanks a lot, Marta
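As a quick sanity check (illustrative Python, not part of Splunk), the REGEX from the transforms.conf above does match both sample events. That suggests the pattern itself is fine, and the problem may be where the transform runs: index-time TRANSFORMS only apply on the first Splunk instance that parses the data, so if events pass through a heavy forwarder before the indexer, transforms configured only on the indexer never fire.

```python
import re

# Pattern copied from the transforms.conf REGEX above
pattern = re.compile(
    r"(?:ASA|FTD)-\d+-(?:113008|113012|113004|113005|611101|605005|"
    r"713166|713167|713185|716038|716039|713198|502103|111008|111010)"
)

events = [
    "Mar 1 16:26:52 xxx.xxx.xxx.xxx Mar 01 2022 16:26:52 hostname : "
    "%FTD-6-113008: AAA transaction status ACCEPT : user = username",
    "Mar 1 17:42:18 xxx.xxx.xxx.xxx Mar 01 2022 17:42:18 hostname : "
    "%ASA-6-611101: User authentication succeeded: IP address: xxx.xxx.xxx.xxx, Uname: username",
]

for e in events:
    print(bool(pattern.search(e)))  # True for both sample events
```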
Hi, the indexer cluster peers' status occasionally fluctuates from Up to Pending. I have verified that there are no resource shortage problems (memory/CPU) when the indexer cluster peers fluctuate. What could be the reason for the fluctuating indexer peers?

splunkd Health.log shows the messages below:

01-02-2022 02:00:11.893 +0000 INFO PeriodicHealthReporter - feature="Indexers" color=yellow indicator="missing_peers" due_to_threshold_value=1 measured_value=3 reason="The following peers are in transition: Indexer1(Pending), Indexer2(Pending), Indexer3(Pending). " node_type=indicator node_path=splunkd.indexer_clustering.indexers.missing_peers

watchdog/watchdog.log shows the messages below:

01-02-2022 02:15:19.01 +0000 ERROR Watchdog - No response received from IMonitoredThread=0x7fe9c63f70 within 8000 ms. Looks like thread name='CMMasterRemoteStorageThread' tid=28158 is busy !? Starting to trace with 8000 ms interval.
01-02-2022 02:16:23.12 +0000 INFO Watchdog - Stopping trace. Response for IMonitoredThread ptr=0x7fefb70 - thread name='CMMasterRemoteStorageThread' tid=28158 - finally received after 72049 ms (estimation only).
How do I create a search that would display the time, user, hostname, and URL that a list of users are visiting?
Windows disk performance latencies ("Avg. Disk sec/Transfer", etc.) are given in seconds. Splunk_TA_windows renders these for the CIM as:

[Perfmon:LogicalDisk]
EVAL-latency = if(counter=="Avg. Disk sec/Transfer",Value*1000,null())
EVAL-read_latency = if(counter=="Avg. Disk sec/Read",Value,null())
EVAL-write_latency = if(counter=="Avg. Disk sec/Write",Value,null())

Why is one given as milliseconds, but the others as seconds?
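If consistent millisecond units are wanted, one hedged option is to shadow the TA's evals with a local override — a sketch only; verify the units your downstream dashboards and the CIM model expect before deploying:

```ini
# Splunk_TA_windows/local/props.conf (sketch)
# Scale read/write latency to ms so all three match EVAL-latency
[Perfmon:LogicalDisk]
EVAL-read_latency = if(counter=="Avg. Disk sec/Read",Value*1000,null())
EVAL-write_latency = if(counter=="Avg. Disk sec/Write",Value*1000,null())
```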
Hello everybody, I am upgrading Splunk Enterprise from 7.3.X to 8.2.5 (Windows). Due to compatibility, I also need a more recent Windows version on my hosts to support Splunk. Therefore, I'm going to use a new host for each server. The architecture includes:

- 1 cluster master
- 1 deployment server
- 1 search head
- 2 indexers (cluster)
- 1 poller (heavy forwarder)
- n universal forwarders

I've found HERE how to migrate a Splunk Enterprise instance from one physical machine to another; can anybody confirm the following procedure?

- Stop Splunk Enterprise services on the host from which I want to migrate
- Roll any hot buckets on the source host from hot to warm
- Copy the entire contents of the $SPLUNK_HOME directory and all the directories containing buckets from the old host to the new one
- Turn off the old host
- Configure the new host to have the same IP address and hostname as the old host; this avoids having to redirect forwarders to the new instance
- Install Splunk Enterprise 7.3.X on the new host
- Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host
- Start Splunk Enterprise on the new instance
- Log into Splunk Enterprise and confirm that your data is intact by searching it
- Upgrade from 7.3.X to 8.1.X and then to 8.2.5

Should I apply these steps to every host? What about the two indexers? I'm going to need to migrate data; what's the correct procedure? Also, I'm afraid that the new installation would reingest data from the poller; should I do something to prevent it? Last thing: I will probably need to change the IP of one indexer; when should I change its configuration?

Thanks in advance for any help.
I can't seem to figure this out. I've read every thread on here as well as the Splunk docs relating to this. The SPL output looks like I want it to, but on a dashboard everything is blue. I've added fieldColors to my source, but still can't get it to work. What am I missing? Attachment provided.

index=health_checks dev=false
| stats avg(eval(round(uptime_minutes*100,0))) as uptime, avg(eval(round(month_minutes*100,0))) as month by customer
| eval score=round(uptime/month*100,0)
| eval range=case(score < 75, "severely degraded", score >= 75 AND score < 95, "slightly degraded", score >= 95, "healthy")
| stats count(score) as stacks by range

<option name="charting.fieldColors">{"healthy": 0x008000, "slightly degraded": 0xFFFF00, "severely degraded": 0xFF0000, "NULL": 0xC4C4C0}</option>
OS: RHEL 7
Splunk Version: 8.0.4

Hi, I have a problem that recently popped up after upgrading from Splunk 8.0.4 to 8.2.1. In version 8.0.4, Splunk would generate our report and send out emails with no problems. Now we are receiving this error message in the Search & Reporting app:

"Error in lookup command: Script execution failed for external search command '/<Splunk_Home>/etc/apps/TA-user-agents/bin/user_agents.py'"

I've checked the permissions on the user_agents.py file, and they look to be correct. Any help or suggestions are greatly appreciated.
Hi guys, I have a query which produces the result below. (The table shows count by xyz for the user-selected time range.) I would like to add one more column to this table, LessThanThreshold, which would tell the number of times the count in each day was below the corresponding Threshold value. To be precise, for a row: if the value of 01-Mar-22 < Threshold, then increment the new column LessThanThreshold by 1; if 28-Feb-22 < Threshold, then increment LessThanThreshold by 1. Using foreach, I am not sure how to compare between the columns themselves. Could someone please help me out here? Thanks
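A minimal foreach sketch of the row-wise comparison described above (hedged; this assumes the date columns all match a pattern like *-*-22 and that Threshold is a field present on each row — adjust the wildcard to your actual column names). foreach expands <<FIELD>> to each matching column name, so each date column is compared against Threshold in turn:

```
| eval LessThanThreshold = 0
| foreach *-*-22
    [ eval LessThanThreshold = LessThanThreshold + if('<<FIELD>>' < Threshold, 1, 0) ]
```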
I am performing the Splunk query on the following result. The following field repeats 100 times with different values:

randomstring=randomstring&firstRex=firstRexValue&anotherradomstring=antotherrandomstring&secondRex=secondrexvalue&somotherstuff=someotherstuffvalue&yetanotherstuff=yetanotherstuffvalue&thirdRex=thirdrexvalue

The Splunk query is as below:

source="source" searchquery
| rex "firstRex=(?<value1>[^&]+)"
| rex "secondRex=(?<value2>[^&]+)"
| rex "thirdRex=(?<value3>[^&]+)"
| transaction value1
| table value2 value3

Now when I do table, value2 and value3 don't seem connected. I mean the column value2 has 5 rows while column value3 has 7 rows, for example. Further, I would also like to add the date for each event in the table; how can I do it? And I would need your suggestion on how to perform the extraction in a single rex query instead of three.
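On the single-rex point: since the three keys always appear in the same order in the sample, one non-greedy pattern can capture all three groups at once, e.g. `| rex "firstRex=(?<value1>[^&]+).*?secondRex=(?<value2>[^&]+).*?thirdRex=(?<value3>[^&]+)"` (a sketch; it assumes the key order is fixed). The same pattern, checked in Python against the sample string:

```python
import re

# Combined pattern: assumes firstRex, secondRex, thirdRex always appear in this order
pattern = re.compile(
    r"firstRex=(?P<value1>[^&]+).*?secondRex=(?P<value2>[^&]+).*?thirdRex=(?P<value3>[^&]+)"
)

sample = (
    "randomstring=randomstring&firstRex=firstRexValue"
    "&anotherradomstring=antotherrandomstring&secondRex=secondrexvalue"
    "&somotherstuff=someotherstuffvalue&yetanotherstuff=yetanotherstuffvalue"
    "&thirdRex=thirdrexvalue"
)

m = pattern.search(sample)
print(m.group("value1"), m.group("value2"), m.group("value3"))
# -> firstRexValue secondrexvalue thirdrexvalue
```

For the date question, the usual approach is simply to include _time in the output, e.g. `| table _time value2 value3`.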
Good morning, I am attempting to use a timechart that will show me the ratio of my GET/POST HTTP requests within a span of 1 hour. However, the output in my timechart only displays the latest result, regardless of the time (i.e., if the current ratio output is .75 as of 9:00pm, it will display as .75 for 8:00pm, even though it was .50 at 8:00pm). Here is my current search query:

index=nsm source="/nsm/zeek/logs/current/*http*"
| eventstats count(eval(method="GET")) as GET, count(eval(method="POST")) as POST
| eval Ratio=round(GET/POST, 2)
| timechart span=1h values(Ratio)

I've attempted many different things, including time modifiers, but so far no luck. This is the closest I've gotten to what I want, but it will not accurately display the ratio for previous times. Is there any way around this?
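One likely cause, sketched as a fix: eventstats computes its counts over the whole result set, so every event carries the same global GET/POST totals, and every hourly bucket therefore shows the same ratio. Letting timechart do the per-hour counting itself, and computing the ratio afterwards, would give a per-bucket ratio:

```
index=nsm source="/nsm/zeek/logs/current/*http*"
| timechart span=1h count(eval(method="GET")) as GET, count(eval(method="POST")) as POST
| eval Ratio=round(GET/POST, 2)
| fields _time Ratio
```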
I have created a table that looks as follows. The columns are variable, as they depend on the selected time frame. I want to apply conditional formatting to each cell in the table based on the first numeric value in the cell: the cell should be colored red if the numeric value is lower than 400. Applying the following colorPalette expression doesn't seem to work:

<format type="color">
  <colorPalette type="expression">if(tonumber(mvindex(split(value," "),0)) &lt; 400,"#FF5733",null)</colorPalette>
</format>

While the following does:

<format type="color">
  <colorPalette type="expression">if((substr(value,1,1)="1" OR substr(value,1,1)="2" OR substr(value,1,1)="3" OR substr(value,1,1)="0") AND substr(value,4,1)=" ","#FF5733",if(substr(value,3,1)=" ","#FF5733", null))</colorPalette>
</format>

However, the latter expression doesn't color cells with a numeric value of 0. In addition, it looks sloppy and is difficult for my colleagues to understand. Can someone explain why the first expression doesn't work and/or provide a solution? Thank you.
Background
In my system, every visit consists of one or more transactions, and every transaction has its own global serial number, which is unique (gsn for short). A transaction may produce many rows of logs, but they all share the same gsn. A transaction always ends with "trans end transName", where "transName" is the name of the transaction; a transaction named Test ends with "trans end Test", for example. Every transaction's name is unique.

Questions
Now I have a transaction named A, which in some cases will do something special and log "special". But other transactions will log "special" too. There is at most 1 "special" and 1 "trans end A" per gsn. How can I get the rate of transaction A that goes this way with just one command or, failing that, some way faster than a subsearch?

Tried
Below is what I've tried. The subsearch runs very slowly, taking at least 5 minutes. If there's no one-command way, I want a way faster than a subsearch.

//get the count of transaction A
"trans end A" | stats count

//get the count of transaction A that runs specially
join type=inner gsn [search "trans end A"] | regex "special" | stats count
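A single-pass sketch without join (hedged; it assumes gsn is an extracted field and, as described, at most one "special" and one "trans end A" per gsn). Grouping both event types by gsn first, then keeping only gsns that finished transaction A, avoids the subsearch entirely:

```
("trans end A" OR "special")
| stats count(eval(searchmatch("trans end A"))) as ended_a,
        count(eval(searchmatch("special"))) as special by gsn
| where ended_a > 0
| stats sum(eval(if(special > 0, 1, 0))) as special_a, count as total_a
| eval rate = round(special_a / total_a, 4)
```

Since each transaction has its own gsn, "special" lines from other transactions fall under different gsns and are dropped by the `where ended_a > 0` filter.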
I'm new with Splunk. I installed the MS Windows AD Objects app, and in order to fix the shared points:

First: add an automatic lookup for source XMLWinEventLog:Security using the AD_Audit_Change_EventCodes lookup.

In the MS Windows AD Objects app, navigate to Settings -> Lookups -> Automatic Lookups. Click New Automatic Lookup and enter the following:

Name: ms_ad_obj_wrkaround_msad_action
Source: XmlWinEventLog:Security
Lookup input fields: EventCode = EventCode, obj_type = obj_type
Lookup output fields: change_action = change_action

Click Save, then set the permissions to the app and role permissions.

I did what was asked, but I still get the message "Could not load lookup=LOOKUP-ms_ad_obj_wrkaround_msad_action", with a failure for some functionalities of the application.
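For reference, the UI steps above correspond roughly to a props.conf stanza like this sketch (assumptions: the lookup definition AD_Audit_Change_EventCodes exists and is shared at a scope visible to the app — permission scope on the lookup definition and its file is a common cause of "Could not load lookup" errors):

```ini
# props.conf (sketch; equivalent of the automatic-lookup UI steps)
[source::XmlWinEventLog:Security]
LOOKUP-ms_ad_obj_wrkaround_msad_action = AD_Audit_Change_EventCodes EventCode AS EventCode, obj_type AS obj_type OUTPUT change_action AS change_action
```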