All Topics

I have a query which shows the details of users and the VPN host they are connected to. If a user has connected to vpn_bom in the last 24 hours, I don't want to see his details in the results. I want to display all users who have not connected to vpn_bom in the last 24 hours at least once. Thank you for the help as always.

These are the results I'm getting when I execute the query below, but I don't want to display chrispar's details because he has connected to vpn_bom at least once. I want only those users who have not connected to vpn_bom and have connected to other VPNs (sbala and jeffp in this case).

Results:

User      User_Country  Target_VPN
chrispar  India         vpn_dub
chrispar  India         vpn_bom
chrispar  India         vpn_sin
sbala     India         vpn_sin
sbala     India         vpn_phx
jeffp     India         vpn_fra
jeffp     India         vpn_ash

Query:

index=vpn Cisco_ASA_message_id=722051 OR Cisco_ASA_message_id=113019 NOT "AnyConnect-Parent"
| transaction user endswith="Duration:" keepevicted=true
| eval full_duration = duration_hour."h".duration_minute."m".duration_second."s"
| eval bytesMB=round(((bytes/1024)/1024),2), bytes_inMB=round(((bytes_in/1024)/1024),2), bytes_outMB=round(((bytes_out/1024)/1024),2)
| eval Start_time=strftime(_time,"%Y/%m/%d %H:%M:%S"), End_time=strftime(_time + duration,"%Y/%m/%d %H:%M:%S"), Total_time=if(isnull(full_duration), Start_time." --> current session", Start_time." --> ".End_time)
| mvexpand src
| iplocation src
| eval LocationIP=City.", ".Country
| stats values(host) as vpn_host values(Total_time) as "Session Time" values(src) as "PublicIP" values(LocationIP) as LocationIP values(assigned_ip) as "Assigned IP" values(reason) as "Termination Reason" values(bytesMB) as bytesMB values(bytes_inMB) as bytes_inMB values(bytes_outMB) as bytes_outMB values(full_duration) as Duration by _time, user
| rename LocationIP as User_Location
| eval temp=split(User_Location,",")
| eval User_Country=mvindex(temp,1)
| fields - temp
| rename user as User vpn_host as Target_VPN
| table User User_Country Target_VPN
| search User_Country=*India*
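One possible approach (a sketch against the query above, untested): collect each user's full set of VPNs with eventstats, then keep only users whose set does not contain vpn_bom. These commands would be appended after the final two commands of the existing query:

```
| table User User_Country Target_VPN
| search User_Country=*India*
| eventstats values(Target_VPN) as user_vpns by User
| where isnull(mvfind(user_vpns, "^vpn_bom$"))
| fields - user_vpns
```

mvfind returns NULL when no multivalue entry matches the regex, so the where clause drops every row belonging to a user who touched vpn_bom while keeping users like sbala and jeffp.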
Hello all, I am forwarding the following information to Splunk as a sourcetype (with more than 15,000 similar lines):

2021-Jan-14 09:07 2 servername2 instance1 2021-Jan-14 09:07:25.393 [transaction_string1] 79897 67163 OK 1 [269661] 97 28 OK

The text file forwarded to Splunk has no header, but I do know how to create one using the Fields options - that won't be an issue. I need to create a report with the following specs:

1. Rows: "Scored" - a rangemap for the value represented in the text file as 97 (after [269661]). The range map should be:
0s-to-0.05s = 1-50
0.05s-to-0.10s = 51-100
0.10s-to-0.15s = 101-150
0.15s-to-0.20s = 151-200
0.20s-to-0.30s = 201-300
0.30s-to-0.50s = 301-500
0.50s-to-1s = 501-1000
1s-to-2s = 1001-2000
2s-to-3s = 2001-3000
3s-to-5s = 3001-5000
5s-to-30s = 5001-30000
>30s = 30001-99999

2. Columns:
- All: a sum(count) for each range present - if there are no records for a specific range, then 0 should be shown as the total.
- servername (alphabetically sorted) with instanceId (there are two, 1 and 2, for each servername), each one getting the count for each range value in "Scored" above. If the count is 0 for a specific range on a servername and instanceId, then 0 should be shown for that servername and instanceId.

By the looks of it, this can be achieved using a pivot. Can anyone help me build a search query that produces the desired output? Thanks!
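A sketch of one way to build this without a pivot, assuming the response-time value has already been extracted into a field called score and the server and instance fields are called servername and instanceId (all placeholder names, since the file has no header yet):

```
index=my_index sourcetype=my_sourcetype
| eval Scored=case(score<=50, "0s-to-0.05s", score<=100, "0.05s-to-0.10s", score<=150, "0.10s-to-0.15s", score<=200, "0.15s-to-0.20s", score<=300, "0.20s-to-0.30s", score<=500, "0.30s-to-0.50s", score<=1000, "0.50s-to-1s", score<=2000, "1s-to-2s", score<=3000, "2s-to-3s", score<=5000, "3s-to-5s", score<=30000, "5s-to-30s", true(), ">30s")
| eval server_instance=servername."_".instanceId
| chart count over Scored by server_instance
| fillnull value=0
| addtotals fieldname=All
```

chart only supports a single split field, so servername and instanceId are concatenated first; fillnull supplies the required zeros and addtotals produces the "All" column. Ranges with no events at all would still be missing as rows, which would need an appendpipe or lookup of the twelve range labels to guarantee.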
Hello everyone, I have multiple fields and I want to extract an ID from one of them (that's the only value that changes in it). My fields are: class, method, message, nb. The message field looks like this: "] id not found for opp : [12345azeAZE". I want to extract the value after the "[" (in bold) and create a message with it. Can I have a result like this, please: There are $nb$ errors from the method: $method$ with the class: $class$ on the opp: $message$. Thanks.
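A sketch of one way to do this with rex and eval (the capture pattern assumes the ID is everything after the final "[", as in the example value above; opp_id and result are made-up names):

```
| rex field=message "\[(?<opp_id>[^\[]+)$"
| eval result="There are ".nb." errors from the method: ".method." with the class: ".class." on the opp: ".opp_id
```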
Hi there, is there any option to transfer tokens set in dashboard A to dashboard B using a drilldown? My aim is to set tokens in dashboard A and, by clicking on a panel linked by drilldown to dashboard B, have the tokens in dashboard B take the same values I set earlier in dashboard A. Is this possible with the drilldown option or not? Because so far I only get the value of the clicked panel transferred into dashboard B.
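This is commonly done by appending the tokens as form.* query-string parameters on the drilldown link. A sketch in Simple XML (the app name, dashboard name, and token names are placeholders):

```
<drilldown>
  <link target="_blank">/app/my_app/dashboard_b?form.tok_region=$form.tok_region$&amp;form.tok_user=$click.value$</link>
</drilldown>
```

Dashboard B would then declare matching inputs whose tokens are tok_region and tok_user, so the incoming form.* parameters pre-populate them.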
Hello All, when I send a log to Splunk, I am not getting the events in the same order as the log. For example, my first line in the log may be the 7th or 10th event in Splunk, the 2nd line might be the 20th, and the 3rd might be the 1st event in Splunk. In my data, I need to check whether the file processed successfully or not, and whether any error occurred. If it processed successfully, how many rows were updated? These three things are on different lines/events, and if the data is not in the correct order I cannot relate one to the other. Can you please help me get the data in log order? I really appreciate any suggestion! Thanks in advance!
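If the goal is simply to read the events in log order, sorting on the indexed timestamp is the usual approach (a sketch; the index and source names are placeholders, and sort 0 removes the default 10,000-row limit):

```
index=my_index source=my_log_file
| sort 0 _time
```

One caveat: if many log lines share the same timestamp down to the second, the original order cannot be recovered from _time alone, and a subsecond TIME_FORMAT in props.conf (or relating the three lines by a common ID with stats or transaction) may be needed instead.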
Hi all! I am relatively new to Splunk and I am trying to use the results of one search in another search. So:

index=index1 <conditions> OR index=index2 <conditions>
| stats count by src servname
| fields src
| rename src as ip

Results:

ip
1.1.1.1
2.2.2.2
3.3.3.3
4.4.4.4

In index3 the field is called ip. Based on the returned ip list above, I would like to run:

index=index3 ip="1.1.1.1" OR ip="2.2.2.2" OR ip="3.3.3.3" OR ip="4.4.4.4"
| stats count by ip, description

But I can't seem to do it. When I make use of format or subsearches like

index=index3 [ search (index=index1 or index=index2 ... ] | stats count by ip, description

it returns results for all IPs and their descriptions in index3. The subsearch results ("1.1.1.1", "2.2.2.2", "3.3.3.3", etc.) do not get passed into the index3 search as a filter. How can I make this happen? Pardon my explanation if it's too lengthy.
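A sketch of how the subsearch can emit the right field name: renaming src to ip inside the subsearch makes Splunk expand its results to ( ip="1.1.1.1" ) OR ( ip="2.2.2.2" ) ... as a filter on index3 (the <conditions> are the placeholders from the question):

```
index=index3
    [ search index=index1 <conditions> OR index=index2 <conditions>
      | stats count by src
      | fields src
      | rename src as ip ]
| stats count by ip, description
```

The key point is that a subsearch's returned field names must match the field names in the outer search; with fields src still in place, the outer search was being filtered on src, a field index3 does not have.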
Hello Splunk Community, I have encountered an easy yet tricky situation. I was told the chart command works just like stats, and so far I have all the data I need. However, I have an issue with one field. I need to populate the setting field for the data. I can get the setting field to populate, but it shows up three different times, once for each drive - I only need it to show one time.

index="123" sourcekind="Disk Space" count="% Space" dev_team=XYZ drive="C:" OR drive="E:" OR drive="F:"
| chart values(setting) as setting sparkline(avg(NET),15m) as NET_value, latest(NET) as Free over server by drive

Data I receive:

server  setting:C:  setting:E:  setting:F:  Free:C:  Free:E:  Free:F:  Net_Value:C:  Net_Value:E:  Net_Value:F:
123456  Testing     Testing     Testing     23.00    24.00    34.00    \             \             \

Desired results:

server  setting  Free:C:  Free:E:  Free:F:  Net_Value:C:  Net_Value:E:  Net_Value:F:
123456  Testing  23.00    24.00    34.00    \             \             \
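One way to collapse the three per-drive setting columns into a single one after the chart (a sketch, untested; the exact split-column names come from the chart output and may need adjusting to match what the table actually shows):

```
index="123" sourcekind="Disk Space" count="% Space" dev_team=XYZ drive="C:" OR drive="E:" OR drive="F:"
| chart values(setting) as setting sparkline(avg(NET),15m) as NET_value, latest(NET) as Free over server by drive
| eval setting=coalesce('setting: C:', 'setting: E:', 'setting: F:')
| fields - "setting: C:", "setting: E:", "setting: F:"
```

coalesce takes the first non-null of the three columns, which works here because the setting value is the same for every drive on a server.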
Hello All! I'm trying to figure out how to stop an active playbook from auto-running when an artifact is added to a case via the GUI. I can't seem to find any documentation or option to turn this functionality off. Is there a setting for this? Or do I need to add logic to my playbook so it cancels itself if it has already been run on the current container?
Hello. I've got a problem with timestamp extraction. I can get it working on Splunk 8.0+, but it fails on Splunk 7.2. I'll explain my setup, then the problem.

Configuration

inputs.conf - There are multiple source types from a single source. We set sourcetype to changeme, then override it later with a transform:

[tcp://20000]
index = product_analytics
sourcetype = changeme
connection_host = none
host = change_me
source = Single_Source

props.conf - Anything from the above source has three transforms applied to it. Note that each source type has a different TIME_FORMAT requirement. We set a default against the source, then override the parameters in the source type:

[source::Single_Source]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\r\r)
TRANSFORMS-Single_Source = json_override_sourcetype,json_override_host,json_strip_indexing_data
TIME_PREFIX = ("|<)t("|>):*

[Source_Product_One]
KV_MODE = json
TIME_FORMAT = %s%3N

[Source_Product_Two]
KV_MODE = xml

transforms.conf - The source type is defined in the incoming JSON data by an "st" variable. We know the transforms work, as the indexed data has the source type set to the expected value (same for the host set by the second transform and the stripping of data by the third transform):

[json_override_sourcetype]
DEST_KEY = MetaData:Sourcetype
REGEX = "st":"([^"]*)
FORMAT = sourcetype::$1

[json_override_host]
DEST_KEY = MetaData:Host
REGEX = "h":"([^"]*)
FORMAT = host::$1

[json_strip_indexing_data]
DEST_KEY = _raw
REGEX = ^.*"h":"[^"]*",(.*)$
FORMAT = {$1

Problem

On Splunk 7.2, any events with the Source_Product_One source type fail to have their timestamps correctly extracted (the received event time is used, not the contents of the "t" field in the incoming data). Events with a source type of Source_Product_Two are correctly extracted. On Splunk 8.0 this works: events with either source type have their timestamps correctly extracted.

On Splunk 7.2, if I move the TIME_FORMAT parameter into the source stanza - out of the source type stanzas - then timestamp extraction works, but this breaks time extraction for all other source types. See this example:

[source::Single_Source]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\r\r)
TRANSFORMS-Single_Source = json_override_sourcetype,json_override_host,json_strip_indexing_data
TIME_FORMAT = %s%3N
TIME_PREFIX = ("|<)t("|>):*

[Source_Product_One]
KV_MODE = json

[Source_Product_Two]
KV_MODE = xml

This works - Source_Product_One source types now have timestamps extracted, but Source_Product_Two source type events no longer do. I've also tried the following, and it didn't work either - timestamp extraction continued to fail:

[source::Single_Source]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\r\r)
TRANSFORMS-Single_Source = json_override_sourcetype,json_override_host,json_strip_indexing_data
TIME_PREFIX = ("|<)t("|>):*

[Source_Product_One]
KV_MODE = json
TIME_FORMAT = %s%3N

[Source_Product_Two]
TIME_FORMAT = %a %b %d %H:%M:%S %Z%:z %Y

Each source type has a different TIME_FORMAT requirement, so I'd like to fix it so both are correctly applied. Any suggestions on how to do this, or advice on debugging? Thanks for your time.
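One configuration worth trying (a sketch, untested on 7.2): give each source type stanza a complete set of timestamp parameters by duplicating TIME_PREFIX alongside its TIME_FORMAT, rather than inheriting TIME_PREFIX from the source stanza while overriding TIME_FORMAT:

```
[source::Single_Source]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\r\r)
TRANSFORMS-Single_Source = json_override_sourcetype,json_override_host,json_strip_indexing_data

[Source_Product_One]
KV_MODE = json
TIME_PREFIX = ("|<)t("|>):*
TIME_FORMAT = %s%3N

[Source_Product_Two]
KV_MODE = xml
TIME_PREFIX = ("|<)t("|>):*
TIME_FORMAT = %a %b %d %H:%M:%S %Z%:z %Y
```

A caveat: index-time sourcetype overrides via TRANSFORMS happen later in the ingestion pipeline than initial parsing, which may be why per-sourcetype timestamp settings behave differently between 7.2 and 8.0; if this sketch does not help, assigning the correct sourcetype at the input (e.g. separate TCP ports per product) avoids the override entirely.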
I've Googled it, but can't find a solution. I've got a search that pulls the validators remaining per Subject. I want to pass that list of validators (a single multivalue field) and the Subject (and maybe status) to the parent search, so the final table is the parent search with the correlated subsearch data (list of validators). The two searches are on different indexes, but the Subjects are the same. Any ideas or links to solutions that I obviously missed?
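A sketch of one way to correlate the two indexes on Subject (index and field names are placeholders; join runs the inner search as a subsearch and is subject to subsearch result limits, so for very large result sets a stats-based merge over both indexes may be preferable):

```
index=parent_index
| join type=left Subject
    [ search index=validator_index
      | stats values(validator) as remaining_validators by Subject ]
| table Subject, status, remaining_validators
```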
I have been using the AWS Add-on to pull data from S3 into Splunk. It's been working well for internal use. I have now been tasked with providing data to an external source, which should only have access to the data it needs. The S3 bucket looks similar to this:

bucketname
  dataset1/dashboard
  dataset2/dashboard  <-- external source should only access this data
  dataset3/dashboard

I created AWS keys specifically for this external source. I want to set my permissions so the external source only has access to dataset2, without viewing data from dataset1 and dataset3. The problem is that Splunk requires the ListBucket permission to index the data. I have attempted to use a condition statement to lock this down to the specific folder. However, with the permissions below, Splunk cannot grab the data. If I remove the condition statement, the data is processed into Splunk, but I can then see the contents of dataset1 and dataset3.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBuckets"],
      "Resource": ["arn:aws:s3:::bucketname"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["dataset2/", "dataset2/dashboard/*"]
        }
      }
    },
    {
      "Sid": "AllowUsersToAccessFolder2Only",
      "Effect": "Allow",
      "Action": ["s3:GetObject*", "s3:PutObject*"],
      "Resource": ["arn:aws:s3:::bucketname/dataset2/dashboard/*"]
    }
  ]
}

My solution for now is to create another bucket specifically for this external source and reroute the data to that bucket. I want to avoid this if possible.
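One thing worth checking (an observation on the policy above, not a verified fix): the list action is written as s3:ListBuckets, but the actual S3 action name is s3:ListBucket (singular), so the first statement may simply never match. A sketch of the policy with that corrected and the prefix condition kept:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucketname"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["dataset2/", "dataset2/dashboard/*"]
        }
      }
    },
    {
      "Sid": "AllowUsersToAccessFolder2Only",
      "Effect": "Allow",
      "Action": ["s3:GetObject*"],
      "Resource": ["arn:aws:s3:::bucketname/dataset2/dashboard/*"]
    }
  ]
}
```

PutObject is dropped here on the assumption that a read-only consumer does not need it; add it back if the external source also writes.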
I have a date field like this: 2021-01-29 00:25:58.913024+00. I want to convert this date into a number of days from the current time, using now(), when I run the search. Thanks in advance.
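A sketch of one way to do this (the field name date_field is a placeholder; the subsecond and timezone suffix is stripped before parsing to keep the format string simple):

```
| eval date_epoch=strptime(replace(date_field, "\.\d+\+\d+$", ""), "%Y-%m-%d %H:%M:%S")
| eval days_old=round((now() - date_epoch) / 86400, 1)
```

strptime converts the string to a Unix epoch, and dividing the difference from now() by 86400 (seconds per day) yields the age in days.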
I have a project I am working on that displays when a user logs onto a server and logs out, then calculates the duration of the two, giving the session time. I have all of the events for both "log on" and "log off".

Currently I have a table that shows:

Host      Account_Name  Group   Session Start  Session End  Duration
fdk-DC01  jim.smith     logon   1611665560
fdk-DC01  jim.smith     logoff  1611774585

Because each event is one entry, both logon and logoff fall in the "Session Start" column. Is there a way to create a row when "Host, Account_Name, Group and Time" are all in the event, and just append the latest logoff time to the entry that matches the same "Host, Account_Name, Group"?

In other words, the table would show, on Tuesday, January 26, 2021 12:52:40 PM:

Host      Account_Name  Group  Session Start  Session End  Duration
fdk-DC01  jim.smith     logon  1611665560

And once jim.smith logs off on fdk-DC01, it would add "_time" to Session End, on Wednesday, January 27, 2021 7:09:45 PM:

Host      Account_Name  Group  Session Start  Session End  Duration
fdk-DC01  jim.smith     logon  1611665560     1611774585   11.71 hr

Any pointers would be appreciated!
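A sketch of pairing the two event types with stats (field names taken from the table above; the base search is a placeholder). This pairs one logon with one logoff per Host/Account_Name; if a user can have multiple sessions in the window, a transaction or streamstats approach would be needed instead:

```
index=my_index (Group="logon" OR Group="logoff")
| eval start=if(Group=="logon", _time, null()), end=if(Group=="logoff", _time, null())
| stats min(start) as Session_Start max(end) as Session_End by Host, Account_Name
| eval Duration=if(isnotnull(Session_End), round((Session_End - Session_Start) / 3600, 2)." hr", "")
```

An open session (no logoff yet) leaves Session_End null, so the row appears with a blank Session End and Duration, matching the desired intermediate table.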
The 1st search works (I get all fields in my table, including GUID):

earliest=-1y index=azuread sourcetype="ms:aad:audit" category=DirectoryManagement (activityDisplayName="CreateTrustFrameworkPolicy" OR activityDisplayName="Add unverified domain" OR activityDisplayName="Add verified domain" OR activityDisplayName="Set federation settings on domain" OR activityDisplayName="Get tenant details" OR activityDisplayName="Initialize tenant" OR activityDisplayName="Create company" OR activityDisplayName="Create program")
| fillnull value="N/A"
| rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@"
| table activityDateTime, activityDisplayName, operationType, targetResources{}.displayName, targetResources{}.modifiedProperties{}.displayName, targetResources{}.modifiedProperties{}.oldValue, targetResources{}.modifiedProperties{}.newValue, initiatedBy.user.userPrincipalName, GUID

The 2nd search works (I get cn from the map command by itself):

earliest=-1y index=azuread sourcetype="ms:aad:audit" category=DirectoryManagement (activityDisplayName="CreateTrustFrameworkPolicy" OR activityDisplayName="Add unverified domain" OR activityDisplayName="Add verified domain" OR activityDisplayName="Set federation settings on domain" OR activityDisplayName="Get tenant details" OR activityDisplayName="Initialize tenant" OR activityDisplayName="Create company" OR activityDisplayName="Create program")
| rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@"
| fillnull value="N/A"
| map search="ldapsearch domain=DEFAULT search=\"(&(objectClass=user)(qcguid=$GUID$))\" attrs=cn"
| table cn

The 3rd search, combining the two, blanks out my table but properly shows the cn field obtained from map:

earliest=-1y index=azuread sourcetype="ms:aad:audit" category=DirectoryManagement (activityDisplayName="CreateTrustFrameworkPolicy" OR activityDisplayName="Add unverified domain" OR activityDisplayName="Add verified domain" OR activityDisplayName="Set federation settings on domain" OR activityDisplayName="Get tenant details" OR activityDisplayName="Initialize tenant" OR activityDisplayName="Create company" OR activityDisplayName="Create program")
| rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@"
| fillnull value="N/A"
| map search="ldapsearch domain=DEFAULT search=\"(&(objectClass=user)(qcguid=$GUID$))\" attrs=cn"
| table activityDateTime, activityDisplayName, operationType, targetResources{}.displayName, targetResources{}.modifiedProperties{}.displayName, targetResources{}.modifiedProperties{}.oldValue, targetResources{}.modifiedProperties{}.newValue, initiatedBy.user.userPrincipalName, GUID, cn

How do I fix this? Append, appendcols, join? Any ideas? Thanks!
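map replaces each outer result with the results of its own inner search, so the outer fields disappear unless they are re-created inside the map. A sketch that evals the outer values back in via $...$ tokens (untested; only three fields are shown, but the same pattern applies to the rest, and maxsearches may need raising to cover all outer rows):

```
| map maxsearches=1000 search="ldapsearch domain=DEFAULT search=\"(&(objectClass=user)(qcguid=$GUID$))\" attrs=cn
    | eval GUID=\"$GUID$\", activityDateTime=\"$activityDateTime$\", activityDisplayName=\"$activityDisplayName$\""
| table activityDateTime, activityDisplayName, GUID, cn
```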
Query example:

index=eks sourcetype="kube:container" message=log
| fields data.user_agent
| rex field=data.user_agent mode=sed "s/[0-9]//g"
| rex field=data.user_agent mode=sed "s/\.//g"
| eval agent = data.user_agent
| table data.user_agent, agent

After this query the `agent` column is empty, while data.user_agent is filled with data. I was expecting a text copy. Also, if I add some logic based on data.user_agent, it does not work for the same reason:

index=eks-prod sourcetype="kube:container:api-auth" message=web_login
| fields data.user_agent
| rex field=data.user_agent mode=sed "s/[0-9]//g"
| rex field=data.user_agent mode=sed "s/\.//g"
| eval agent = if(like(data.user_agent, "Mozilla%"), "browser", "device")
| stats count by agent

This always produces "device", never "browser".
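The likely cause: in eval expressions, a field name containing dots must be wrapped in single quotes, otherwise data.user_agent is parsed as data concatenated with user_agent (both null). A sketch of the corrected queries:

```
index=eks sourcetype="kube:container" message=log
| eval agent='data.user_agent'
| table data.user_agent, agent
```

and for the conditional version:

```
index=eks-prod sourcetype="kube:container:api-auth" message=web_login
| eval agent=if(like('data.user_agent', "Mozilla%"), "browser", "device")
| stats count by agent
```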
I have a small lookup table with 135 dest_ip values and a search that runs that lookup table against a 40 TB index (for a 6-month period for those IPs). When I run this search (or add an IP to the lookup table, or even just search one or two single IPs by themselves) against this 40 TB index for a time period longer than a month, the search takes hours, and I mean hours. My question is: without a data model, how can I speed this search up? I tried tstats, but that doesn't work unless you have a data model (at least I could not get it to work); I also tried TERM and could not get that to work either. Any ideas? Here is the current search I'm using:

index=myindex src_ip=*
| lookup mylookup.csv dest_ip OUTPUT dest_ip
| dedup src_ip, dest_ip
| table src_ip, dest_ip
| sort src_ip

The above search works great for an alert I run every 15 minutes to see if anyone hits the IPs in the lookup, but for searching a large index it takes forever. Any help in speeding up a search like this would be appreciated. Thank you.
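One way that often helps dramatically (a sketch, assuming dest_ip is an indexed/extracted field in myindex): push the lookup's IPs into the base search with inputlookup in a subsearch, so the indexers only retrieve events matching those 135 addresses instead of scanning every event with src_ip=* and filtering afterwards:

```
index=myindex
    [ | inputlookup mylookup.csv | fields dest_ip | format ]
| dedup src_ip, dest_ip
| table src_ip, dest_ip
| sort src_ip
```

The subsearch expands to ( dest_ip="x.x.x.x" ) OR ( dest_ip="y.y.y.y" ) OR ..., which lets Splunk use its index terms for pruning rather than reading the whole 6-month window.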
Hi, I have a field called datetime - an example is datetime=Wed Feb 03 17:56:37 UTC 2021. I essentially want to convert this so I can evaluate the difference between the event timestamp and this datetime field. I have already converted the event timestamp using:

| eval input=strptime(_time, "%m/%d/%Y %I:%M %p")

which shows me 2021-02-03 18:07:42.958. How do I convert the datetime field so it's in the same format, and how do I show the difference between these fields?
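Note that _time is already a Unix epoch, so it does not need strptime; only the datetime string does. A sketch that parses datetime into epoch form and takes the difference (the format string matches the example value above):

```
| eval dt_epoch=strptime(datetime, "%a %b %d %H:%M:%S %Z %Y")
| eval diff_seconds=round(_time - dt_epoch, 0)
| eval diff_readable=tostring(abs(diff_seconds), "duration")
```

tostring with the "duration" option renders the difference as HH:MM:SS for display.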
Hi Splunkers, is anyone using the Tenable Scan app or add-on? I heard that monitoring can be done directly in Tenable, but I'm trying to leverage Splunk in my organization. Does it make any difference, or bring value, to monitor Tenable data in Splunk? Please shed some light on this topic. Thanks in advance.
I have a workflow action that works correctly when doing a normal search. I added a panel to my dashboard so users can choose the workflow action to research issues further, but the workflow action is not available in the dashboard. If I open that same search in a new window, I see the workflow action available. Is there a limitation on workflow actions applying to dashboards?