All Topics

I have two queries whose results I am trying to join together. The first query has the organization details and the second query contains the contact details. I would like to join both organization and contact details into a single table.

The first query:

    index="prd-app" event_id="order_placed" sourcetype="data-app" product_id="27" origin="online123"

The results look like this:

    [2022-08-31 11:08:33.580780] [php:notice] pid=15226 cIP=10.10.10.10:56172 event_id="order_placed" app="application_name" log_level="INFO" order_id="123456789" acct_id="123456" user_id="147852" origin="online123" product_id="27" org_name="Example Inc" org_addr1="50th Avenue" org_city="New York" org_state="New York" org_zip="10001" org_country="us" transaction_id="68a26e21add3d5a34184c3e6fde2da6c"

I want to take the acct_id from the first query and use it in a second query. However, in the second query this value is not a field value; it is a substring within a JSON string.

The second query:

    index="prd-app" event_id="direct_proxy" sourcetype="org-api" "123456"

I would usually just append the acct_id value like above to get the results. In this case, the acct_id value needs to be added to the query dynamically. The results look like this:

    2022-08-31 11:08:33.580780 DEBUG 1 --- [nio-9005-exec-9] c.d.b.integrations.app.DirectProxy : transaction_id=68a26e21add3d5a34184c3e6fde2da6c event_id=direct_proxy result={"id":1680770,"account_id":123456,"name":"Example Inc","assumed_name":"","address":"50th Avenue","address2":"","city":"New York","state":"New York","zip":"10001","country":"us","email":"","telephone":"","risk_score":0,"registration_number":"","jurisdiction_city":"","jurisdiction_state":"","jurisdiction_country":"","incorporating_agency":"","contacts":[{"id":147852,"type":"tech","first_name":"Bill","last_name":"Jones","job_title":"Director","email":"bill.jones@example.com","telephone":"","fax":""}]}

Note: the acct_id value is within the JSON string. I want to capture the entire result field containing the JSON string and make that a separate field value, combined with the results from the first query. The table should combine fields from both results:

From query 1: order_id, acct_id, user_id, origin, product_id, org_name, org_addr1, org_city, org_state, org_zip, org_country
From query 2: result
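A minimal sketch of one way to stitch these together, with assumptions flagged: that the JSON payload can be recovered with a rex on result={...} (the pattern is assumed, not confirmed against all events), and that acct_id corresponds to the account_id key inside the JSON:

    index="prd-app" event_id="order_placed" sourcetype="data-app" product_id="27" origin="online123"
    | table order_id acct_id user_id origin product_id org_name org_addr1 org_city org_state org_zip org_country
    | join type=left acct_id
        [ search index="prd-app" event_id="direct_proxy" sourcetype="org-api"
        | rex field=_raw "result=(?<result>\{.*\})" ```capture the whole JSON payload```
        | spath input=result path=account_id output=acct_id ```derive the join key from inside the JSON```
        | table acct_id result ]

One caveat on the design: join subsearches are capped by row and runtime limits, so at higher volumes the same merge is often done by searching both event types at once and combining with stats values(*) by acct_id instead.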
I have a problem with alert triggering on a Splunk search that runs on a cron schedule.

Search query:

    index=pdx_pfmseur0_fxs_event sourcetype=st_xfmseur0_fxs_event
    | eval trackingid=mvindex('DOC.doc_keylist.doc_key.key_val', mvfind('DOC.doc_keylist.doc_key.key_name', "MCH-TrackingID"))
    | rename gxsevent.gpstatusruletracking.eventtype as events_found
    | rename file.receiveraddress as receiveraddress
    | rename file.aprf as AJRF
    | table trackingid events_found source receiveraddress AJRF
    | stats values(trackingid) as trackingid, values(events_found) as events_found, values(receiveraddress) as receiveraddress, values(AJRF) as AJRF by source
    | stats values(events_found) as events_found, values(receiveraddress) as receiveraddress, values(AJRF) as AJRF by trackingid
    | search AJRF=ORDERS2 OR AJRF=ORDERS1
    | stats count as total
    | appendcols [search index=idx_pk8seur2_logs sourcetype="kube:container:8wj-order-service" processType=avro-order-create JPABS | stats dc(nativeId) as rush]
    | appendcols [search index=idx_pk8seur2_logs sourcetype="kube:container:9wj-order-avro-consumer" flowName=9wj-order-avro-consumer customer="AB" (message="HBKK" OR message="MANU") | stats count as hbkk]
    | eval gap = total-hbkk-rush
    | table gap, total, rush
    | eval status=if(gap>0, "OK", "KO")
    | eval ressource="FME-FME-R:AB"
    | eval service_offring="FME-FME-R"
    | eval description="JPEDI - Customer AB has an Order Gap \n \nDetail : JPEDI - Customer AB has an Order Gap is now :" + gap + "\n\n\n\n;support_group=AL-XX-MAI-L2;KB=KB0078557"
    | table ressource description gap total rush service_offring

The cron job configured on this alert fired three times with the same result: at 17:50, 18:50 and 21:50, each with gap=9.

Is there a solution to limit alert triggering to just once per time interval: 08:50 to 10:50, 10:50 to 15:50, and 15:50 to 21:50?
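One possible direction, offered as a sketch rather than a verified fix: Splunk's built-in throttling (the Throttle checkbox in the alert's settings, i.e. alert.suppress in savedsearches.conf) suppresses re-triggering for a fixed period after the alert fires. The stanza name below is hypothetical:

    # savedsearches.conf (stanza name is hypothetical)
    [JPEDI AB Order Gap]
    alert.suppress = 1
    # suppress repeats for 5 hours after a trigger; a single fixed
    # period cannot follow the uneven 2h/5h/6h windows exactly
    alert.suppress.period = 5h

For the uneven windows, an alternative is three copies of the alert, each scheduled by cron to run once at the close of its own window (for example 50 10 * * * for the first), searching only that window's time range.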
Task: identify which McAfee A.V. agents have the latest updates happening.

Work done:
1) Created a lookup and added all the unique source IPs, 54 in total.
2) Created a search that finds only the McAfee agents that have been updated, added a value of 0 for tracking, and then used a join statement to merge it with the lookup created earlier, with a value of 1.

Problem statement: I am looking for the srcip/agents that are NOT updated, i.e. not present in the logs but present in the lookup, and the query is not showing me that result. It does work the other way around, i.e. when looking for the srcip/agents common to both the lookup and the search logs. Screenshots attached (snap 2 shows the non-common values; the others show a common entry, a non-common entry, and the lookup values).

Please help me rectify the query.
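A common pattern for "in the lookup but not in the logs", sketched with assumed names (mcafee_agents.csv, its srcip column, and the index/sourcetype are placeholders for the real ones in the screenshots):

    index=your_mcafee_index sourcetype=your_mcafee_sourcetype ```placeholders```
    | stats count by srcip
    | append [| inputlookup mcafee_agents.csv | fields srcip | eval count=0]
    | stats sum(count) as events by srcip
    | where events=0

Agents present only in the lookup end up with events=0 and survive the final filter. Note also that join defaults to type=inner, which silently drops rows with no match on the other side; that would explain why the join-based query shows the common agents but never the missing ones.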
Hi community,

I tried to limit a HEC input to a certain index and have observed unexpected behavior in several cases. The configuration on the heavy forwarder goes as follows:

    [http://hec_instance]
    description = HEC input for customer1
    disabled = 0
    index = customer1
    indexes = [customer1]
    token = XXXXXXXX

I am loading test events in JSON format using curl.

Case 1) The destination index is specified correctly in the metadata:

    curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "index": "customer1", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: the event ends up in the correct index => expected behavior.

Case 2) The destination index specified in the metadata does not exist on the indexer cluster:

    curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "index": "customer2", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: the event ends up nowhere; the HF does not complain and returns code 0 as if everything were fine, and it is the indexer (not the HF) that logs a message about incoming events for a non-existing index => not expected behavior.

Case 3) The destination index specified in the metadata is an existing index, but it is not in the 'indexes' list for this HEC definition:

    curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "index": "main", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: no error is generated and the event is sent to the 'main' index => not expected behavior.

Case 4) The destination index is not specified in the metadata:

    curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: no error is generated and the event is sent to the 'main' index => not expected behavior.

Has someone observed this before? Is there something in my setup that I fail to see?

Additional info: the HF is running Splunk 8.2.2; the indexers are clustered and configured via the cluster manager.

Thanks in advance for any info shedding light on this. Regards, Carlo
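One thing worth checking, stated as an assumption rather than a confirmed diagnosis: in inputs.conf the indexes setting takes a comma-separated list of index names, with no square brackets. If [customer1] is not parsed as a valid index name, the allow-list is effectively empty, which would be consistent with cases 3 and 4. A sketch of the stanza as documented:

    # inputs.conf on the heavy forwarder
    [http://hec_instance]
    description = HEC input for customer1
    disabled = 0
    # default index for events whose metadata carries no index
    index = customer1
    # comma-separated allow-list of permitted indexes -- no brackets
    indexes = customer1
    token = XXXXXXXX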
Further to my previous post here, which was generously solved by ITWhisperer: Solved: Help with search to use for dashboard - link key-v... - Splunk Community

My chart looks like this (which is what I wanted to achieve).

My challenge now is to have charts which:
- have the mac_address as a variable rather than fixed, so that it's more flexible; can this be read from the index rather than having to type it? I have a dashboard that uses a hostname in this way (syntax below)
- show the lines from multiple devices, i.e. stats from mac_address_1 AND mac_address_2 AND ... (up to mac_address_x) on the same chart
- offer a drop-down menu to choose to display either mac_address_1 OR mac_address_2 OR ... (up to mac_address_x)

Again, any help much appreciated. NM

Current search:

    | where key="counter_01" AND mac_address="xx:yy:zz:aa:bb:01"
    | timechart values(value) by key

Sample search which allows me to view via a variable (hostname); note, this is from an unrelated project, shown just for illustration:

    host=$host_name$ source="xxx" | timechart avg(value_1) as "Avg Value 1" avg(value_2) as "Avg Value 2" by host

One issue I see is that I already have a "by" defined in this project, which is "by key".
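A sketch of the token-driven variant, assuming a dashboard dropdown input whose token is mac_address. The input's populating search can read the device list straight from the index (the index name is a placeholder):

    index=your_counter_index ```placeholder; whichever index feeds the chart```
    | stats count by mac_address
    | fields mac_address

The panel search then filters on the token and splits by device instead of by key, so each selected device draws its own line:

    index=your_counter_index key="counter_01" mac_address="$mac_address$"
    | timechart values(value) by mac_address

For several devices at once, the usual approach is a multiselect input with its prefix, suffix, and delimiter set so the token expands to mac_address="a" OR mac_address="b".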
Hello Splunkers,

I was wondering if there is a way to get the creation date of a correlation search. If so, what is it? I have found nothing anywhere.

Thanks in advance, best regards!
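One avenue, with a caveat: saved-search objects (and correlation searches are saved searches under the hood, flagged by action.correlationsearch.enabled) expose a last-modified timestamp via REST, not a true creation date. A sketch:

    | rest /servicesNS/-/-/saved/searches
    | search action.correlationsearch.enabled=1
    | table title eai:acl.app updated ```updated = last modification, not creation```

If the object has never been edited since it was created, updated is effectively the creation date; otherwise the _audit index may still hold the original creation event, depending on retention.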
Hello everyone,

I cannot find how to move all values of a column (Total) one row up in a table.

This is my current scenario:

    Day        Total
    Monday
    Tuesday    2
    Wednesday  3
    Thursday   4
    Friday     5
    Saturday   6
    Sunday     7

This is my desired scenario:

    Day        Total
    Monday     2
    Tuesday    3
    Wednesday  4
    Thursday   5
    Friday     6
    Saturday   7
    Sunday

Can anyone help me please? Thanks in advance.
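A minimal sketch of one way to do the shift, assuming the rows arrive in the order shown; reverse flips the row order so streamstats can pick up what was originally the next row, then a second reverse restores the order:

    ...
    | reverse
    | streamstats current=f window=1 last(Total) as Total_shifted
    | reverse
    | eval Total=Total_shifted
    | fields - Total_shifted

Monday picks up Tuesday's 2, and Sunday, having no following row, ends up empty, matching the desired output.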
Hi All,

I want to create a use case where an account is inactive for 60 days and then gets enabled after those 60 days. I tried to draft the logic, but I am not sure whether the query is correct. Can somebody please modify the query if it needs changes?

    index=wineventlog EventCode=4624 user="*@xyz.com" earliest=-60d latest=now()
    | transaction user maxspan=60d search (EventCode!=)

Thank you
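A different sketch of the detection, with the assumptions stated loudly: "enabled" is taken to mean Windows event 4722 (a user account was enabled), 4624 logons mark activity, the lookback is stretched to 90 days for context, and 5184000 is 60 days in seconds:

    index=wineventlog (EventCode=4722 OR EventCode=4624) user="*@xyz.com" earliest=-90d
    | sort 0 user _time
    | streamstats current=f window=1 last(_time) as prev_event_time by user
    | where EventCode=4722 AND (isnull(prev_event_time) OR _time - prev_event_time > 5184000)
    | table _time user

For each enable event this compares against the user's immediately preceding event in the window; a gap above 60 days (or no earlier event at all) flags the account.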
In the user role capability settings, the following two items are enabled by default. What control takes effect if I disable each of them?

Capability names:
(1) list_all_objects
(2) rest_apps_view

When I disabled (1) above, the entire account name in the menu bar in the upper right corner of the screen disappeared. Please let me know whether this is the expected behavior.
Hi Splunkers,

I've installed both the Add-on for VMware metrics collector and the Add-on for Unix and Linux. I noticed that the same host, collected by the two add-ons, is managed in different ways. Is there a best practice to follow in order to merge the data, or at least to tell ITSI we're talking about the same host? I've already tried to merge entities, but without finding an acceptable solution. Can someone help me with this topic?
Hi,

I want to create a Splunk use case where, after 3 logon failures, the account gets enabled again. I was working on it and below is my query, but it is giving me 0 results. Can you please help me modify the query?

    source=WinEventLog:Security (EventCode=4625 OR EventCode=4624)
    | eval username=mvindex(Account_Name, 1)
    | streamstats count(eval(match(EventCode, "4625"))) as Failed, count(eval(match(EventCode, "4624"))) as Success reset_on_change=true by username
    | eval alert=if(Failed>3, "yes", "no")
    | where Failed > 3
    | eval newname=username, newhost=host
    | where (Success > 1 AND host=newhost AND username=newname)
    | eval end_alert="YES"
    | table _time, username, host, Failed, Success, alert, newname, newhost, end_alert

Thanks
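A sketch of an alternative shape for this detection, with the assumptions called out: it reads "enabled again" as a successful logon (4624) arriving after at least three failures (4625) for the same user on the same host; if "enabled" really means event 4722, that code would replace 4624 in the endswith clause:

    source=WinEventLog:Security (EventCode=4625 OR EventCode=4624)
    | eval username=mvindex(Account_Name, 1)
    | transaction username host startswith=eval(EventCode=="4625") endswith=eval(EventCode=="4624")
    | where eventcount >= 4
    | table _time username host duration eventcount

Because only these two event codes are searched and endswith closes each transaction at the first 4624, eventcount >= 4 implies at least three failures before the success.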
Hello folks,

I have logger lines as below:

    job MONITOR-DESYNC-3-20I-ERNC: { "chain":"PR1", "nbProperties":1345, "propertyStartCount":1, "nbPropertyPerExecution":5, "propertyEndCount":6, "nbPropertyForCurrentExecution":5 }
    job MONITOR-DESYNC-3-20I-ERNC: { "chain":"PR2", "nbProperties":1345, "propertyStartCount":6, "nbPropertyPerExecution":5, "propertyEndCount":11, "nbPropertyForCurrentExecution":5 }

These lines continue until propertyEndCount = nbProperties, but sometimes they do not get that far and stop randomly, like below. This job stopped at "propertyEndCount":1076:

    job MONITOR-DESYNC-3-6AQ-Q7Z: { "chain":"PR1", "nbProperties":1345, "propertyStartCount":1071, "nbPropertyPerExecution":5, "propertyEndCount":1076, "nbPropertyForCurrentExecution":5 }

I need a Splunk query to find how many properties (hotels) got covered for each chain. In this case the expected output is:

    chain   total-property   covered-property
    PR1     1345             1076
    PR2     1345             1000

I am quite new to Splunk queries. I think if I could somehow fetch the value of propertyEndCount from the last event, it should work. Can anyone provide a solution to get the expected result above? Thanks in advance.
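A minimal sketch, assuming the JSON payload always follows "job <name>: " on a single line and that the highest propertyEndCount seen per chain is the coverage (the index name and the rex pattern are assumptions):

    index=your_index "job MONITOR-DESYNC" ```placeholder index```
    | rex "job (?<job>[^:]+): (?<json>\{.*\})"
    | spath input=json
    | stats latest(nbProperties) as total_property max(propertyEndCount) as covered_property by chain

If coverage should instead come from the final event of each job rather than the overall maximum, latest(propertyEndCount) with job added to the by clause would be the variant to try.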
Hi,

We are facing an error when an SPL search with dbxquery is run on Splunk. The strange thing is that the issue is intermittent; we checked the internal logs and found it occurred on only one search head out of the 5 in the cluster. I am not sure why this is happening or what I can check on the Splunk side to fix it.

    Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 127
Hi,

I want to create an alert on traffic-drop deviation: if the traffic drops by 50% compared to the previous hour, or drops to zero, the alert should trigger. Alerting on 0 traffic is easy, but that could give false positives as well, so I am trying to find a way to alert only on a significant deviation. Is that possible?

I have this query at the moment, which looks at the incoming requests. I can run the alert every 15 or 30 minutes and want it to trigger if there is a deviation:

    index=myapp_prod "message.logPoint"=INCOMING_REQUEST
    | timechart span=30m count

Best Regards, Shashank
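A sketch of one way to compare the latest bucket with the one before it, assuming the alert runs hourly over the last two whole hours (the 1h span and the 50% threshold are parameters to tune):

    index=myapp_prod "message.logPoint"=INCOMING_REQUEST earliest=-2h@h latest=@h
    | timechart span=1h count
    | streamstats current=f window=1 last(count) as prev_count
    | eval drop_pct=if(prev_count > 0, round(100 * (prev_count - count) / prev_count, 1), null())
    | tail 1 ```keep only the most recent bucket```
    | where count=0 OR drop_pct >= 50

With the alert's trigger condition set to "number of results > 0", it fires only on a zero bucket or a drop of at least 50% versus the previous hour.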
Hello Everyone,

I have two queries to exclude events, one using NOT and the other using IN. Both queries return the same results, but the query using NOT takes less time. My question: shouldn't it be the other way around? Why does NOT take less time to execute?

Query using IN (slower):

    index="some_index" sourcetype="some_sourec_type" app_code=XXXX a_status IN (0,1,40) AND b_status IN (2,1,10,20)

Query using NOT (faster):

    index="some_index" sourcetype="some_sourec_type" app_code=XXXX NOT a_status IN (0, -1, -2, -5) NOT b_status IN (-1, -6, -5, null)
I have an installedAt field which gives the application's installation time. If I run a Splunk search for the last 7 days, it shows applications installed at various times. I want a query that finds only the applications installed in the last 7 days.
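A minimal sketch, assuming installedAt is a string timestamp such as 2022-09-02T15:39:31 (the strptime format must be adjusted to the real data; the index and the application field name are placeholders):

    index=your_index installedAt=* ```placeholders```
    | eval installed_epoch=strptime(installedAt, "%Y-%m-%dT%H:%M:%S")
    | where installed_epoch >= relative_time(now(), "-7d@d")
    | stats latest(installedAt) as installedAt by application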
Hi Splunk community,

I want to chart the data retrieved from an index, filtering the app_name field to match the values in a lookup file. Some app_name values in the lookup file will not be in the index, and they need to be added as new rows, labeled "Not executed" for their status. My SPL looks like below:

    index="my_index"
    | search [ inputlookup my_lookup | table "App Name" | rename "App Name" as app_name ]
    | table app_name stage_name stage_status
    | eval stage_name = "Stage - " + stage_name
    | rename app_name as App
    | chart values(stage_status) by App, stage_name useother=f limit=0

Here is what I get:

    App     Stage A   Stage B   Stage C   Stage D
    App_A   PASSED    FAILED    PASSED    PASSED

And I want it to look like this:

    App     Stage A        Stage B        Stage C        Stage D
    App_A   PASSED         FAILED         PASSED         PASSED
    App_B   Not executed   Not executed   Not executed   Not executed
    ...     Not executed   Not executed   Not executed   Not executed

Please help and advise. Thanks!
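A sketch of one way to pull the missing apps in after the chart, assuming the lookup's App Name column lists every expected app; the appended rows carry only App, and fillnull stamps "Not executed" into their empty stage columns:

    index="my_index"
    | search [| inputlookup my_lookup | table "App Name" | rename "App Name" as app_name]
    | eval stage_name = "Stage - " + stage_name
    | rename app_name as App
    | chart values(stage_status) by App, stage_name useother=f limit=0
    | append [| inputlookup my_lookup | rename "App Name" as App | table App]
    | stats first(*) as * by App
    | fillnull value="Not executed"

The stats first(*) as * by App collapses each app to a single row, preferring the charted values where they exist.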
Hi Folks,

I would appreciate some help creating a dashboard. I want a simple line chart that shows how a value changes over time. My data comes from a csv file, in this format:

    timestamp          mac_address         key         value
    20220902-153931    xx:yy:zz:aa:bb:01   counter_01  246897
    20220902-153931    xx:yy:zz:aa:bb:01   counter_02  1595

Further on in the same file we see the same keys for a different device (by mac_address):

    timestamp          mac_address         key         value
    20220902-153931    xx:yy:zz:aa:bb:02   counter_01  600
    20220902-153931    xx:yy:zz:aa:bb:02   counter_02  1350

This is how the data looks in search for a single device (identified by mac_address) and a single key (counter_01) with a value of 246897.

These values are pulled via a script which runs on a schedule, so the index will contain updated data with new timestamps. In all, there are about 20 key/value pairs per device per run of the script.

What I would like to achieve: a simple line chart that shows the values for device 1, showing the counter_01 key and how the value changes over time. The problem I am having is understanding how to get the chart to identify the device and then show the right stat. Once I know how to do this, I'm sure I can work out how to display the other values.

As always, I'm very grateful for any help. NM
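A minimal sketch for the single-device, single-counter chart, assuming the four csv columns are extracted as fields at search time (the index name is a placeholder):

    index=your_counter_index mac_address="xx:yy:zz:aa:bb:01" key="counter_01" ```placeholder index```
    | timechart latest(value) as counter_01

If _time is not already parsed from the csv's timestamp column at ingest, an eval _time=strptime(timestamp, "%Y%m%d-%H%M%S") before the timechart can derive it.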
Hi,

I would like to create a dashboard with the event IDs below for the application usecube:

    4720  A user account was created.
    4722  A user account was enabled.
    4723  An attempt was made to change an account's password.
    4724  An attempt was made to reset an account's password.
    4725  A user account was disabled.
    4726  A user account was deleted.
    4738  A user account was changed.
    4740  A user account was locked out.
    4767  A user account was unlocked.
    4780  The ACL was set on accounts which are members of administrators groups.
    4781  The name of an account was changed.

Is it possible to show both the old and new value?

Thanks for your feedback. Best regards, Cédric
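A starting sketch, assuming Windows Security logs onboarded the usual way (for example via the Splunk Add-on for Microsoft Windows). For event 4781 the old and new names are typically extracted as separate fields; Old_Account_Name and New_Account_Name are the commonly seen names but should be verified against the actual events:

    source=WinEventLog:Security EventCode IN (4720,4722,4723,4724,4725,4726,4738,4740,4767,4780,4781)
    | timechart span=1d count by EventCode

and, for the rename events specifically:

    source=WinEventLog:Security EventCode=4781
    | table _time Old_Account_Name New_Account_Name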