Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


So I have two services where an API call starts in service A and propagates to service B. I want to trace the errors across both and create a dashboard that shows the consolidated errors. The logs are as follows.

Service A logs:

    10.0.9.456 - - 23/Mar/2021:17:29:52 +0000 "POST Error occured in service A status 400 bad request referenceid 1615 msg Some bad request error occured in application B status 400 url /test/user/myuserfield/authorize?service=myservicename&serviceT=myserviceTypeid

Service B logs:

    { "userId": "/myuserfield", "transactionId": "abcd", "timestamp": "2021-03-24T15:41:25.770Z", "eventName": "myevent", "component": "mycomponent", "response": { "statusCode": "400", "detail": { "reason": "Bad Request" } }, "http": { "request": { "method": "POST", "path": "http://dummyurl", "queryParameters": { "serviceId": [ "myservicename" ], "serviceType": [ "myservicetype" ] } } } }

The mapping of fields between the two services is:

    Service A field    Service B field
    status             statusCode
    service            serviceId
    serviceType        serviceT
    user               userId

I have tried to use a subsearch to extract fields from service A:

    index=*serviceB*
    | spath
    | rename userId as user, http.request.queryParameters.serviceId{} as service, http.request.queryParameters.serviceType{} as serviceT
    | search [search index=*serviceA*
        | rex "/test/user(?<user>/\w+)+/authorize.*\?+service+\=(?<service>\w+)+\&+serviceT\=(?<serviceT>.*)\""
        | dedup user service serviceT
        | fields user service serviceT]

The expression above gives me the service B logs that were propagated from service A. What I want now is to display the data in a tabular format for better readability. So I have two questions:

1. Is a subsearch the right approach here? Does the expression look correct, and is it the best possible solution?
2. The expression above returns error logs for different users in JSON format. How do I convert these to a tabular format with userId, time, status, etc.?
I also want to filter my table on multiple status classes (like 4XX, 5XX, etc.). How can I achieve that?
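One possible way to tabulate and filter the service B events (a sketch only; the field paths come from the sample event above, and the regex-based 4XX/5XX filter is an assumption to adapt):

```
index=*serviceB*
| spath
| rename userId AS user, response.statusCode AS status
| table _time user status component eventName
| where match(status, "^4\d\d$") OR match(status, "^5\d\d$")
```

In a dashboard, the `where` clause could be driven by a dropdown token instead of hard-coded patterns.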
Hi, I'm trying to sort a table on a rex-extracted field in Splunk Search. For instance, I have the values below:

    Date                              Host             Count
    Wed_Mar_03/12/2021_12:30:01_EDT   mn4.cioprd.lc    4295
    Wed_Mar_03/12/2021_12:40:01_EDT   mn3.ciodev.lc    2182
    Wed_Mar_03/12/2021_12:30:01_EDT   hive3.CIOPRD.LC  1273
    Wed_Mar_03/12/2021_12:30:01_EDT   hive2.cioprd.lc  1202
    Wed_Mar_03/12/2021_12:40:01_EDT   mn4.ciodev.lc    1118

I would like to sort this so the hosts ending in ".cioprd.lc" come first. The table should look like this:

    Date                              Host             Count
    Wed_Mar_03/12/2021_12:30:01_EDT   mn4.cioprd.lc    4295
    Wed_Mar_03/12/2021_12:30:01_EDT   hive2.cioprd.lc  1202
    Wed_Mar_03/12/2021_12:30:01_EDT   hive3.CIOPRD.LC  1273
    Wed_Mar_03/12/2021_12:40:01_EDT   mn3.ciodev.lc    2182
    Wed_Mar_03/12/2021_12:40:01_EDT   mn4.ciodev.lc    1118

I tried using the eval from this doc, but no luck. Can you please help me with this? Thanks.
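One possible approach is to build an explicit, case-insensitive sort key that ranks the cioprd hosts first (a sketch; the field names follow the table above):

```
... | eval domain_rank=if(match(lower(Host), "\.cioprd\."), 0, 1)
| sort 0 domain_rank Date Host
| fields - domain_rank
```

`sort 0` removes the default 10,000-row limit; dropping the helper field afterwards keeps the table clean.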
    index=_audit action=alert_fired ss_app="Threats_App"
    | eval ttl=expiration-now()
    | search ttl>0
    | convert ctime(trigger_time)
    | sort - trigger_time
    | table trigger_time ss_name severity
    | rename trigger_time as "Alert Time" ss_name as "Alert Name" severity as "Severity"

I created a dashboard with a panel containing the query above. It looks for alerts triggered from my app. I want to display the results (stats) of the triggered alerts in a different panel below it on the same dashboard. So it works like this: "Here are the alerts fired, and when you click an alert name, it shows the stats (results) of that alert. This way I can see multiple alerts and the results of those alerts on the same dashboard." I do not want to install additional apps, so please help me with this query only. Please do not suggest apps for a simple solution. Thanks.
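One app-free way to populate the second panel is to load the triggered saved search's own results with `loadjob` (a sketch; the owner segment of the path and the `$alert_name$` drilldown token are assumptions for illustration):

```
| loadjob savedsearch="nobody:Threats_App:$alert_name$"
```

A drilldown on the first panel can set `$alert_name$` from the clicked "Alert Name" cell, so clicking a row refreshes the lower panel with that alert's results.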
I am trying to analyze a historical/intermittent issue surrounding a particular error in our logs. This error usually occurs multiple times, in large bursts, within a 1m-3h window. I am trying to determine how best to find the start time and end time of an "error event" based on how far apart the occurrences are. I am currently using the following search:

    <Events in same time frame as subsearch>
    | [ search host=<server> "Error Message"
        | transaction sourcetype maxpause=5m maxevents=-1
        | table _time,duration
        | eval earliest=_time-120
        | eval latest=_time+duration+120
        | fields earliest latest
        | FORMAT "(" "(" "" ")" "OR" ")" ]

The subsearch works when the errors contained in an event window have a low enough line count, but a few large events, when run with maxevents=-1, appear to corrupt the search results, and I am unable to extract and display the start time and duration of those events. These large transactions have line counts of 82,000 or higher.

Is there a way to gather the start and end time of a group of events without creating a transaction so large that Splunk is unable to display it correctly? I am also trying to use a portion of the subsearch on its own to create a table displaying the start time and duration of each event; the large transactions also cause this table command to fail to complete:

    search host=<server> "Error Message"
    | transaction sourcetype maxpause=5m maxevents=-1
    | table _time,duration

Thank you for your help!
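A gap-based alternative to `transaction` avoids holding all 82,000 events of a burst in memory (a sketch; the 300-second gap mirrors maxpause=5m, and `host=<server>` is left as in the question):

```
host=<server> "Error Message"
| sort 0 _time
| streamstats current=f last(_time) AS prev_time
| eval new_burst=if(isnull(prev_time) OR _time-prev_time>300, 1, 0)
| streamstats sum(new_burst) AS burst_id
| stats min(_time) AS start_time, max(_time) AS end_time, count BY burst_id
| eval duration=end_time-start_time
```

Because `streamstats`/`stats` only track running values per burst, there is no per-transaction event limit, and the final table carries exactly the start time and duration per burst.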
Hello,

With appendcols I now have both values on one line. However, I would like to compare the values with each other. As an example:

    "mysearch" | stats dc(User) as User1
    | appendcols [search "my2search" | stats dc(User) as User2 ]

As a result I get:

    User1  User2
    500    1000

Now I would like to compare the two values in the same query, for example multiply User1 by User2 or similar. How can I include this in the search?
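Since appendcols puts both counts on the same result row, an eval afterwards can do any arithmetic on them (a sketch; the derived field names are illustrative):

```
"mysearch" | stats dc(User) AS User1
| appendcols [search "my2search" | stats dc(User) AS User2]
| eval product=User1*User2, ratio=round(User1/User2, 2)
```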
Hi, I am new to Splunk and looking for some help. I am trying to merge my lookup file data with search results and rearrange them for better usage.

My search results are:

    User   Accessed application   Count
    A      Prog 1                 10
    A      Prog 2                 6
    A      Prog 3                 8
    B      Prog 2                 4
    B      Prog 4                 6

And my lookup file data is:

    Accessed application   Auth_object
    Prog 1                 Auth object 1
    Prog 1                 Auth object 2
    Prog 1                 Auth object 3
    Prog 1                 common_auth_object
    Prog 2                 Auth object 1
    Prog 2                 Auth object 4
    Prog 2                 common_auth_object
    Prog 3                 Auth object 1

And my expected results are:

    User   Accessed application   Auth_object     part_of_common_auth_object   Count
    A      Prog 1                 Auth object 1   x                            10
    A      Prog 1                 Auth object 2   x                            10
    A      Prog 1                 Auth object 3   x                            10
    A      Prog 3                 Auth object 1                                8

Instead of showing a separate line for a program that is part of the common auth object, I want to show an X next to the other auth objects, to indicate that the program is also part of the common object.

Thanks
Vijay
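One possible approach: look up all auth objects as a multivalue field, flag the common one, remove it, then expand (a sketch; `app_auth_lookup` stands in for the actual lookup name):

```
index=...
| stats count AS Count BY User, "Accessed application"
| lookup app_auth_lookup "Accessed application" OUTPUT Auth_object
| eval part_of_common_auth_object=if(isnotnull(mvfind(Auth_object, "common_auth_object")), "x", "")
| eval Auth_object=mvfilter(Auth_object!="common_auth_object")
| mvexpand Auth_object
| table User, "Accessed application", Auth_object, part_of_common_auth_object, Count
```

`mvfind` tests whether the common object is present before `mvfilter` strips it out, so the "x" flag survives on the remaining rows.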
Hello, I am trying to set up a report which will list all user activities on the F: drive. Below is my inputs.conf:

    [WinEventLog://Security]
    disabled = 0
    index = fgfdstore
    start_from = newest
    current_only = 0
    evt_resolve_ad_obj = 1
    checkpointInterval = 15
    whitelist1 = 4663,4656
    renderXml = false

I have also prepared the search query below:

    index=<indexname> Object_Name="F:*" NOT *.*tmp
    | eval folder = mvindex(split(lower(Object_Name),"\\"),3)
    | table _time, Account_Name, folder, Object_Name, Accesses
    | rename Object_Name as "File Path", Account_Name as UserName
    | dedup UserName, "File Path", Accesses
    | sort -_time

With this setup I am able to track activities like delete, modify, READ_CONTROL, and create. However, I am still not getting records for when my colleague opened a file on the F: drive as a test run. I also cannot work out how to tell whether a file has been copied from the F: drive without being opened. My questions are:

1. How can I track if a file is read but not modified?
2. How can I tell if a file is copied without ever being opened?

I am new to Splunk and my questions may appear naive and simple. Any help, guidance, and suggestions are highly appreciated.
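File reads surface in 4663 events whose Accesses value includes ReadData, provided the folder's audit policy (SACL) includes Read access (a sketch; the exact Accesses strings vary by Windows version, so verify against your own events):

```
index=<indexname> EventCode=4663 Object_Name="F:*" Accesses="*ReadData*"
| table _time, Account_Name, Object_Name, Accesses
| sort -_time
```

Note that a copy operation is itself a read of the source file, so the same ReadData events are usually the closest available signal for "copied without being opened in an application".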
Hi Splunkers, I have the logs below and am trying to create an alert when a process run takes more than the expected time.

    2021-03-24T14:00:14.8 STATUS=Successful,ACTIVITY AT=2021-03-24T14:00:14,ACTION TYPE=Process started
    2021-03-24T14:05:21.54 STATUS=Successful,ACTIVITY AT=2021-03-24T14:05:21,TYPE=Process finished

I'm using the query below to track this, but it triggers an alert even when the process completes well within the limit:

    index="abc" TYPE="Process started"
    | eval last_seen=_time
    | eval mins_since = round((now() - last_seen) / (60))
    | table mins_since
    | search mins_since>10

Given the logs above, the alert shouldn't trigger, since the process finished in 5 minutes, but I'm getting false positives. The alert should trigger only if the process has not finished and mins_since>10. I tried search TYPE!="Process finished" AND mins_since>10 but still don't get the desired results. Please help me with this scenario. Thanks.
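One way to flag only unfinished runs is to pair start and finish events and keep the unpaired ones (a sketch; since the sample logs carry no run ID, pairing by adjacency is an assumption):

```
index="abc" ("Process started" OR "Process finished")
| transaction startswith="Process started" endswith="Process finished" keepevicted=true
| where closed_txn=0 AND (now()-_time)/60 > 10
```

`keepevicted=true` retains transactions that never saw a finish event, and `closed_txn=0` selects exactly those, so a completed 5-minute run no longer fires the alert.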
Hi, I tried to deploy the Splunk Cloud credentials app through the deployment server. Unfortunately I get a certificate error when the universal forwarder instances start. Do you have a workaround for this problem? Thanks for your help.
Does anyone have a query that lists universal forwarder hosts by version and server class? I need a report that provides:

    host = <foo>
    Splunk version = <version num>
    serverclass = <bar>

Thank you.
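The forwarder version is reported in the indexers' incoming-connection metrics, and server class membership can be pulled from the deployment server's REST endpoint (a sketch; run the `rest` part on the deployment server, and treat the join key and the `serverClasses` field name as assumptions to verify with `| rest ... | table *`):

```
index=_internal source=*metrics.log group=tcpin_connections
| stats latest(version) AS version BY hostname
| join type=left hostname
    [| rest /services/deployment/server/clients
     | table hostname serverClasses]
```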
Can anyone help?

I am trying to configure a KV store lookup. I have followed the online documentation: https://docs.splunk.com/Documentation/Splunk/8.1.3/Knowledge/ConfigureKVstorelookups and https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usingconfigurationfiles/

However, when I try to populate any data into the lookup with an outputlookup command I get:

    Error in 'outputlookup' command: Lookup failed because collection 'asset_enrichment' in app 'search' does not exist or user does not have read access.

I have a simple collections.conf:

    [asset_enrichment]

and my transforms.conf:

    [asset_enrichment]
    external_type = kvstore
    case_sensitive_match = false
    collection = asset_enrichment
    fields_list = _key, asset_enrichment guid, ........ etc

Both are deployed to my SHC. My assumption is that they are only required on the search heads and not on the index cluster, since the KV store lives on the search head. I have created a lookup definition, but if I look in Settings > Lookups > Lookup table files, my lookup is not listed at all. (Note that KV store lookups appear under lookup definitions, not lookup table files, which lists CSV files.)

Any suggestions?
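Since the error names app 'search' while the collection may be defined in a different app, one thing to check is whether the collection is shared beyond its home app (a sketch; whether this matches your deployment is an assumption):

```
# default.meta (or metadata/local.meta) in the app that ships collections.conf
[collections/asset_enrichment]
export = system
```

With `export = system` the collection is visible from other app contexts, including searches run from the search app.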
Hi everyone, I'm having a strange problem with a Windows Event Collector (WEC). I installed the UF on the WEC (Windows Server Core version), which sends forwarded security events to Splunk. My problem is basically that every single event in Splunk is only 12 lines long, but when I review the event locally there is a lot more information that should be sent to Splunk, for example the description. I configured the same setup with a WEC with a GUI, and there I get all the lines from the event. Has anyone ever had the same problem?
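On Server Core the locally rendered message text can be unavailable to the input, so one workaround to try is collecting the raw event XML instead of the rendered text (a sketch; the stanza name assumes the forwarded events land in the ForwardedEvents channel):

```
[WinEventLog://ForwardedEvents]
disabled = 0
renderXml = true
```

The full payload, including the data that feeds the description, then arrives as XML; field extractions need to account for the changed format.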
Hi team, I am trying to pass earliest and latest time values from a lookup to a saved search, but I am not able to get results.

Lookup:

    | inputlookup temp.csv

Result:

    arg1        arg2
    1607395500  1607396400
    1607395500  1607396400
    1607395500  1607396400

Search with direct values:

    | inputlookup temp.csv | append [search index=abc earliest="1607395500.000" latest="1607396400.000"]

Result: I get the proper result.

Search using the lookup fields:

    | inputlookup temp.csv | append [search index=abc earliest=arg1 latest=arg2]

Result: Invalid value "arg1" for time term 'earliest' and "arg2" for time term 'latest'.

Note: Let me know if you need anything else from my side.
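Field *values* can be handed to earliest/latest through a subsearch rather than referenced by field name (a sketch based on the lookup above; it assumes all rows share the same time range, as in the sample):

```
| inputlookup temp.csv
| append
    [search index=abc
        [| inputlookup temp.csv
         | head 1
         | return earliest=arg1 latest=arg2]]
```

The inner `return` emits the literal string earliest="1607395500" latest="1607396400", which the outer search then treats as time modifiers.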
Greetings. I'm putting together a dashboard query whose results show UIDs and systems. I would like to resolve each UID to a username, so that the dashboard output shows username and system. I have written a Python script that, when passed a UID, returns the username. What I'm stumbling on is calling it correctly and using its output. I've tried calling it as a script and as a lookup, and verified it runs, but I can't get it to do what I want. Can someone give me a shove in the right direction, please?
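Wiring the script in as an external lookup is one way (a sketch; the stanza and file names are illustrative, and the script must live in the app's bin directory and read/write CSV rows with the named header fields on stdin/stdout):

```
# transforms.conf in your app
[uid_to_username]
external_cmd = uid_lookup.py uid username
fields_list = uid, username
```

The search side would then be: ... | lookup uid_to_username uid OUTPUT username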
Hi, I'm trying to get logs from a vCenter server using the DCN. On the Collection Configuration page, I try to add a new DCN (and/or) VC, but after filling in all the relevant fields I see a "loading" pop-up for less than a second, then the pop-up just disappears and no DCN or VC is added. Has anyone seen this behavior before?
Hi everyone, how can I extract the word OutOfMemoryError below from the Splunk logs?

    2021-03-24T09:01:32.357185211Z app_name=dgfassetmutation environment=e1 ns=blazepsfsubscribememsql-c2 pod_container=dgfassetmutation pod_name=dgfassetmutation-deployment-3-p24np stream=stdout message=Terminating due to java.lang.OutOfMemoryError: Metaspace

    2021-03-03T12:45:30.036179788Z app_name=pulldataoneforce environment=e1 ns=blazepsfpublish pod_container=pulldataoneforce pod_name=pulldataoneforce-deployment-175-kv9tv stream=stdout message=Caused by: java.lang.OutOfMemoryError: unable to create new native thread

Thanks in advance.
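A rex extraction can capture the error name, and optionally the reason after the colon (a sketch; the field names jvm_error and jvm_error_detail are illustrative, and app_name comes from the sample events):

```
index=... "java.lang.OutOfMemoryError"
| rex "java\.lang\.(?<jvm_error>OutOfMemoryError)(:\s+(?<jvm_error_detail>.+))?$"
| table _time app_name jvm_error jvm_error_detail
```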
Hello, I am not sure what exactly happens when the trial comes to an end. Will I be downgraded to some kind of free version? I am mainly testing the Splunk and Cloudflare integration, and this is mainly what I need Splunk for. My traffic is very, very small, as it is just for testing purposes. Thanks a lot in advance.
Hi everyone, how can I extract the word below (NullPointerException) from the Splunk logs?

    2021-03-19T06:53:54.98455654Z app_name=data-graph-acct environment=e1 ns=sidh-datagraph3 pod_container=data-graph-acct pod_name=data-graph-acct-deployment-257-4w2f5 stream=stdout message=Caused by: java.lang.NullPointerException: null

    2021-03-19T06:53:54.984026525Z app_name=data-graph-acct environment=e1 ns=sidh-datagraph3 pod_container=data-graph-acct pod_name=data-graph-acct-deployment-257-4w2f5 stream=stdout message=org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.NullPointerException

    2021-03-19T06:53:54.983956753Z app_name=data-graph-acct environment=e1 ns=sidh-datagraph3 pod_container=data-graph-acct pod_name=data-graph-acct-deployment-257-4w2f5 stream=stdout message=2021-03-18 23:53:54.983 ERROR [dgfaccountnode,bb8854c76341f426,bb8854c76341f426,true] 67 --- [nio-8443-exec-9] c.a.s.d.a.filter.AccountNodeFilter : dgfAccountNodeException=Request processing failed; nested exception is java.lang.NullPointerException

Can someone guide me on how to extract the NullPointerException word from the Splunk logs?
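As with any fixed token, a rex capture works here (a sketch; the field name exception is illustrative, and app_name/pod_name come from the sample events):

```
index=... "NullPointerException"
| rex "java\.lang\.(?<exception>NullPointerException)"
| table _time app_name pod_name exception
```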
Hello dear community, please help me with this issue. When using the concurrency command to find out whether transactions overlap in time, is it also possible to calculate the total duration of an incident taking the overlap into account? For example:

    Transaction 1: start -> 10:00, end -> 11:00
    Transaction 2: start -> 10:30, end -> 11:30

Transaction 1 concerns process1 and transaction 2 concerns process2, but both transactions correspond to the same application X. Previously, to calculate the total duration of an incident on application X, I added the duration of transaction 1 to the duration of transaction 2. That is correct when the incidents (transactions) do not overlap, but when they overlap, as in the example above, the total incident duration for the application is 1h30 and not 2h. Can this duration be calculated using the concurrency command?
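Rather than concurrency, the union of the intervals can be computed with a sweep over start times (a sketch; it assumes each result row carries epoch fields start and end for one transaction):

```
| sort 0 start
| streamstats current=f max(end) AS prev_end
| eval effective_start=if(isnotnull(prev_end) AND prev_end>start, prev_end, start)
| eval contribution=if(end>effective_start, end-effective_start, 0)
| stats sum(contribution) AS total_incident_duration
```

Applied to the example, the second transaction contributes only the 11:00-11:30 portion, giving 1h30 in total instead of 2h.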
Hi, we have a search head cluster of 8 members in which the KV store fails frequently, and we have been restarting the services manually. I'd like to create a report showing exactly when the KV store failed and when it came back up. I am not sure which logs contain this information. Could anyone help me with a query for this? Thanks.
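The KV store's own process (mongod) and splunkd's KV store components both log to _internal, which is one starting point (a sketch; the component wildcard and which messages mark failure vs. recovery are assumptions to refine against your own events):

```
index=_internal sourcetype=splunkd component=KVStore*
| table _time host log_level component event_message
| sort 0 _time
```

The mongod process log itself is also indexed in _internal as sourcetype=mongod, which can show the exact crash and restart times.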