All Topics

If I save data that is already present, will the existing entry be updated, or will nothing change?
We are trying to create a query that lists the fields in all sourcetypes, grouped by sourcetype and index. We tried the following query, but its performance is very slow.

| tstats count WHERE index IN(main,_introspection) GROUPBY index, sourcetype
| rename index AS indexname, sourcetype AS sourcetypename
| map maxsearches=100 search="| search index=\"$indexname$\" sourcetype=\"$sourcetypename$\" | head 1 | fieldsummary | eval index=\"$indexname$\", sourcetype=\"$sourcetypename$\" | where NOT isnull(mean) | fields index, sourcetype, field"

Since there can be any number of sourcetypes (350+ for index=main), maxsearches cannot be set that high. Is there any way to optimize this query, or another query that will do the job without the performance lag?
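One way to avoid map entirely, as a minimal sketch: keep one raw event per index/sourcetype pair with dedup, then enumerate that event's field names with foreach. This assumes a single sampled event is representative of the sourcetype's fields, so sparsely populated fields may be missed:

index IN(main, _introspection)
| dedup index, sourcetype
| foreach * [ eval field_list=mvappend(field_list, "<<FIELD>>") ]
| table index, sourcetype, field_list
| mvexpand field_list

Sampling several events per pair (e.g. dedup 20 index, sourcetype followed by a stats values(field_list) by index, sourcetype roll-up) would improve coverage at modest extra cost.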
I have a table with the following information:

Fecha
31/08/2022 16:16:43
31/08/2022 16:19:48
31/08/2022 16:16:34
31/08/2022 16:16:40

I now want to group this information by day, with the start time and end time, for example: 31/08/2022 16:16:34 - 16:19:48

The query:

index=o365 sourcetype=o365:management:activity Operation=UserLoginFailed user=
| stats count, values(user) as Usuario by _time
| eval Fecha = strftime(max(_time), "%d/%m/%Y %H:%M:%S")
| rename count as Contador
| sort -Contador
| table Fecha, Usuario, Contador

Can you help me, please?
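A minimal sketch of one approach: bin the events by day, then take the min and max of _time within each day. Field names follow the original query; the output date format is an assumption:

index=o365 sourcetype=o365:management:activity Operation=UserLoginFailed
| bin span=1d _time as day
| stats count as Contador, values(user) as Usuario, min(_time) as start_time, max(_time) as end_time by day
| eval Fecha = strftime(start_time, "%d/%m/%Y %H:%M:%S") . " - " . strftime(end_time, "%H:%M:%S")
| table Fecha, Usuario, Contador
| sort -Contador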
Hello All. I have been asked to show trends for a business requirement with the dataset I have: past, present, and possibly a prediction for 3-4 months. The only time-related field I have is "WeekStarting", the week in which events occurred. To make it more relatable, I need to show trends in login sharing.

Due to the magnitude of the data, to make more sense of the values I selected three quarters over three years, i.e. (WeekStarting="2020-07-20" OR WeekStarting="2020-08-24" OR WeekStarting="2020-09-28" OR WeekStarting="2021-07-26" OR WeekStarting="2021-08-23" OR WeekStarting="2021-09-20" OR WeekStarting="2022-06-20" OR WeekStarting="2022-07-18" OR WeekStarting="2022-08-22"). I don't have any day-level or time-series data, which makes it difficult to use the timechart or timewrap commands.

What I have used so far, among others:

index="AB" sourcetype="AB"
| spath
| search (WeekStarting="2020-07-20" OR WeekStarting="2020-08-24" OR WeekStarting="2020-09-28" OR WeekStarting="2021-07-26" OR WeekStarting="2021-08-23" OR WeekStarting="2021-09-20" OR WeekStarting="2022-06-20" OR WeekStarting="2022-07-18" OR WeekStarting="2022-08-22")
| stats values(TotalUsers), values(DeviceTypes{}), values(WeekStarting), sum(Newbrowsertypes) as Aggregate_Logins by AccountID
| where Aggregate_Logins >= 5

I know these are not trend commands, but I am really lost as to how I can produce trends from this dataset. Please help!
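One possible approach, sketched under the assumption that WeekStarting is an ISO date string: promote it to _time with strptime, then timechart at weekly granularity and optionally extend the series with predict:

index="AB" sourcetype="AB"
| spath
| eval _time = strptime(WeekStarting, "%Y-%m-%d")
| timechart span=1w sum(Newbrowsertypes) as Aggregate_Logins
| predict Aggregate_Logins future_timespan=12

Since each point is one week, future_timespan=12 extends roughly three months. Note that predict works best on evenly spaced series, so the gaps between the hand-picked quarters may need to be filled or the selection widened first.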
How do I list the events whose array has more than one item?

1) a:[ {"data1":"abc"},{"data1":"def"}]
2) a:[ {"data1":"abc"}]
3) a:[ {"data1":"abc"},{"data1":"def"}]
4) a:[ {"data1":"abc"}]

I want to list only events 1 and 3.
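A minimal sketch, assuming the events are valid JSON (the index and sourcetype are placeholders): extract the array elements into a multivalue field with spath, then keep events where the value count exceeds one:

index=your_index sourcetype=your_sourcetype
| spath path=a{}.data1 output=items
| where mvcount(items) > 1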
Hi, I have a search query with a field named "user_email". I also have a lookup table containing a list of emails. I want my search to show only results where "user_email" is present in that lookup table. Which command is most appropriate for this?
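A common pattern, sketched with hypothetical names (emails.csv, email column): feed the lookup into the base search as a subsearch, which implicitly filters to matching values:

index=your_index [ | inputlookup emails.csv | rename email AS user_email | fields user_email ]

Alternatively, | lookup emails.csv email AS user_email OUTPUT email AS matched followed by | where isnotnull(matched) achieves the same result with more explicit control.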
Hello, I am going through documentation on how to deploy a multisite indexer cluster. The course documentation suggests completing the installation in this order:

License Master
Cluster Master
Indexer
Indexer
Indexer
Search Head

Link them all together into a single-site cluster. It then suggests converting to a multisite cluster with the following commands:

./splunk edit cluster-config -mode manager -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -replication_factor 1 -search_factor 1 -secret idxcluster
./splunk edit cluster-config -site site1
./splunk restart
./splunk edit cluster-config -site site1
./splunk restart
etc.

My question is: is there a shortcut to configuring a multisite cluster without having to configure a single-site cluster first and then converting it? And if there is, what are the commands? Many thanks.
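A sketch of one possible shortcut, assuming Splunk 8.x CLI syntax: declare the cluster multisite from the very first configuration, so no single-site stage is needed. Enable the manager with -multisite true, then point each peer and search head at it with an explicit -site. The hostname and replication port here are placeholders:

# On the cluster manager
./splunk edit cluster-config -mode manager -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -secret idxcluster
./splunk restart

# On each indexer (peer), choosing its own site
./splunk edit cluster-config -mode peer -site site1 -manager_uri https://cm.example.com:8089 -secret idxcluster -replication_port 9887
./splunk restart

# On the search head
./splunk edit cluster-config -mode searchhead -site site1 -manager_uri https://cm.example.com:8089 -secret idxcluster
./splunk restart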
Hi All, I have two sets of logs as below and I want to create a table combining them.

Type1:
Log1:
MACHINE@|@Port@|@Country@|@Count MEMORY STATUS
mwgcb-csrla01u|8070|EAS|5
CNF_| PASS|mwgcb-csrla01u
PASS|mwgcb-csrla02u

Type2:
Log1: source.mq.apac.sg.cards.eas.eas.raw.int.rawevent RUNNING|mwgcb-csrla02u RUNNING|mwgcb-csrla01u RUNNING|mwgcb-csrla02u
Log2: source.mq.apac.in.cards.eas.eas.raw.int.rawevent RUNNING|mwgcb-csrla01u FAILED|mwgcb-csrla02u NA
Log3: source.mq.apac.my.cards.eas.eas.raw.int.rawevent FAILED|mwgcb-csrla02u RUNNING|mwgcb-csrla01u NA
Log4: source.mq.apac.th.cards.eas.eas.raw.int.rawevent RUNNING|mwgcb-csrla01u RUNNING|mwgcb-csrla01u NA
Log5: source.mq.apac.hk.cards.eas.eas.raw.int.rawevent UNASSIGNED|mwgcb-csrla01u RUNNING|mwgcb-csrla01u RUNNING|mwgcb-csrla02u

I extracted the required fields from each of the log types and am trying to create a table with the fields Machine_Name, Port, Worker_Node, Connector_Count, Success_Count, where Success_Count is the number of connectors in the RUNNING state for a Worker_Node. For example, for the above set of logs the table should look like:

Machine_Name    Port  Worker_Node  Connector_Count  Success_Count
mwgcb-csrla01u  8070  EAS          5                3

I tried to combine the two sets of logs with the query below, but have not been successful in getting the above table.

| multisearch
    [ search index=ABC host=XYZ source=KLM
      | regex _raw="\w+\-\w+\|\d+"
      | rex field=_raw "(?P<Machine_Name>\w+\-\w+)\|(?P<Port>\d+)\|(?P<Worker_Node>\w+)\|(?P<Connector_Count>\d+)\s" ]
    [ search index=ABC host=XYZ source=KLM
      | regex _raw!="\w+\-\w+\|\d+"
      | regex _raw!="properties"
      | regex _raw!="MACHINE"
      | regex _raw!="CONNECTOR_NAME"
      | regex _raw!="CNF"
      | regex _raw!="Detailed"
      | rex field=_raw "(?P<Connector_Name>(\w+\.){3,12}\w+)\s"
      | rex field=_raw "(?P<Connector_Name>(\w+\-){3,12}\w+)\s"
      | rex field=_raw "(\w+\.){3,12}\w+\s(?P<Connector_State>\w+)\|"
      | rex field=_raw "(\w+\-){3,12}\w+\s(?P<Connector_State>\w+)\|"
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|(?P<Worker_ID>\w+\-\w+)\s"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|(?P<Worker_ID>\w+\-\w+)\s"
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task1_State>\w+)\|"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task1_State>\w+)\|"
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker1_ID>\w+\-\w+)\s"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker1_ID>\w+\-\w+)\s"
      | replace "mwgcb-csrla01u_XX_" with "mwgcb-csrla01u" in Worker1_ID
      | replace "mwgcb-csrla02u_XX_" with "mwgcb-csrla02u" in Worker1_ID
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task2_State>\w+)"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})(?P<Task2_State>\w+)"
      | replace "NA" with "Not_Available" in Task2_State
      | rex field=_raw "(\w+\.){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker2_ID>\w+\-\w+)"
      | rex field=_raw "(\w+\-){3,12}\w+\s\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|\w+\-\w+\s((\_KK\_){0,1})\w+\|(?P<Worker2_ID>\w+\-\w+)"
      | replace "mwgcb-csrla01u_XX_" with "mwgcb-csrla01u" in Worker2_ID
      | replace "mwgcb-csrla02u_XX_" with "mwgcb-csrla02u" in Worker2_ID
      | fillnull value="Not_Available" Task1_State, Worker1_ID, Task2_State, Worker2_ID ]
| lookup Worker_Connector_List.csv "Connector_Name"
| search Worker_Node=EAS
| stats latest(Connector_State) as Connector_State by Connector_Name
| eval Status=if(Connector_State="RUNNING", "1","0")
| stats sum(Status) as Success_Count
| table Machine_Name,Port,Worker_Node,Connector_Count,Success_Count

Please help me create/modify the query so that I can get the table in the desired form. Thank you all!
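A hedged sketch of one likely fix, not a definitive answer: the | stats latest(Connector_State) ... by Connector_Name step discards Machine_Name, Port, Worker_Node and Connector_Count (they exist only on the Type1 events, which have no Connector_Name), and | stats sum(Status) then discards everything except Success_Count, so the final table is empty. Spreading the Type1 fields across all rows with eventstats before aggregating keeps them available. Replacing the tail of the query, from | stats latest(...) onward:

| eventstats values(Machine_Name) as Machine_Name, values(Port) as Port, values(Connector_Count) as Connector_Count
| stats latest(Connector_State) as Connector_State, values(Machine_Name) as Machine_Name, values(Port) as Port, values(Worker_Node) as Worker_Node, values(Connector_Count) as Connector_Count by Connector_Name
| eval Status=if(Connector_State="RUNNING", 1, 0)
| stats sum(Status) as Success_Count, values(Machine_Name) as Machine_Name, values(Port) as Port, values(Worker_Node) as Worker_Node, values(Connector_Count) as Connector_Count
| table Machine_Name, Port, Worker_Node, Connector_Count, Success_Count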
Hello, I'm trying to draw a cumulative timechart from a CSV file that contains events, each with a starting date and an ending date (basically three fields: "EventName", "StartingDate" and "EndingDate"). The line in the chart should increase when an event starts and decrease when an event finishes. I attached an example of what I'm trying to describe; I hope it helps. I tried creating time ranges from the starting and ending dates to draw the chart I want, but I'm not sure that's the correct approach... Thanks in advance
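A minimal sketch using the concurrency command, which counts how many events overlap at each point in time. The file name and timestamp format are assumptions, and the sort step is there because concurrency expects time-ordered events:

| inputlookup events.csv
| eval _time = strptime(StartingDate, "%d/%m/%Y %H:%M:%S")
| eval duration = strptime(EndingDate, "%d/%m/%Y %H:%M:%S") - _time
| sort 0 _time
| concurrency duration=duration
| timechart max(concurrency) as ActiveEvents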
Our app is enclosed within a Docker container environment. We can access the app only through standard web interfaces and APIs; we have no access to the underlying operating system. So, through an API we retrieve the logs and store them on a remote server. We unzip them, put them in the known paths, and the Splunk UF on that device forwards them to Splunk.

We retrieve our logs every hour, and they overwrite what is already there. This means that, as seen by the Splunk UF, they appear to be new logs; in reality they are the same file, just with another hour of data in them.

Could you please advise on how to deal with this seemingly duplicate log information? Is there a way to handle it in a Splunk search pipeline? Or should we adjust our log collection process before the Splunk UF sends the data to the Splunk Cloud Platform? Thank you.
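As a search-time workaround, a minimal sketch (index and sourcetype are placeholders): deduplicate on a hash of the raw event, assuming identical lines really are re-indexed duplicates rather than legitimately repeated log entries:

index=your_index sourcetype=your_sourcetype
| eval event_hash = sha256(_raw)
| dedup event_hash

Fixing it at collection would be cleaner, though: appending only the new hour of data to the existing files, instead of overwriting them, should let the UF resume from its saved offset and index only the delta.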
I have two queries whose results I am trying to join together. The first query has the organization details and the second query contains the contact details. I would like to join both organization and contact details into a single table.

The first query:

index="prd-app" event_id="order_placed" sourcetype="data-app" product_id="27" origin="online123"

The results will look like this:

[2022-08-31 11:08:33.580780] [php:notice] pid=15226 cIP=10.10.10.10:56172 event_id="order_placed" app="application_name" log_level="INFO" order_id="123456789" acct_id="123456" user_id="147852" origin="online123" product_id="27" org_name="Example Inc" org_addr1="50th Avenue" org_city="New York" org_state="New York" org_zip="10001" org_country="us" transaction_id="68a26e21add3d5a34184c3e6fde2da6c"

I want to take the acct_id from the first query and use it in a secondary query. However, in the second query this value is not a field value; it's a substring within a JSON string.

Second query:

index="prd-app" event_id="direct_proxy" sourcetype="org-api" "123456"

I would usually just append the acct_id value as above to get the results. In this case, I need the acct_id value to be added to the query dynamically. The results will look like this:

2022-08-31 11:08:33.580780 DEBUG 1 --- [nio-9005-exec-9] c.d.b.integrations.app.DirectProxy : transaction_id=68a26e21add3d5a34184c3e6fde2da6c event_id=direct_proxy result={"id":1680770,"account_id":123456,"name":"Example Inc","assumed_name":"","address":"50th Avenue","address2":"","city":"New York","state":"New York","zip":"10001","country":"us","email":"","telephone":"","risk_score":0,"registration_number":"","jurisdiction_city":"","jurisdiction_state":"","jurisdiction_country":"","incorporating_agency":"","contacts":[{"id":147852,"type":"tech","first_name":"Bill","last_name":"Jones","job_title":"Director","email":"bill.jones@example.com","telephone":"","fax":""}]}

Note: the acct_id value is within the JSON string. I want to capture the entire result field containing the JSON string and make it a separate field combined with the results from the first query. The table should combine fields from both results:

From query 1: order_id, acct_id, user_id, origin, product_id, org_name, org_addr1, org_city, org_state, org_zip, org_country
From query 2: result
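A possible sketch: instead of injecting the value into the second query, extract account_id out of the JSON in the second search with rex and spath, then join on acct_id:

index="prd-app" event_id="order_placed" sourcetype="data-app" product_id="27" origin="online123"
| fields order_id, acct_id, user_id, origin, product_id, org_name, org_addr1, org_city, org_state, org_zip, org_country
| join type=left acct_id
    [ search index="prd-app" event_id="direct_proxy" sourcetype="org-api"
      | rex field=_raw "result=(?<result>\{.+\})"
      | spath input=result path=account_id output=acct_id
      | fields acct_id, result ]

One caveat: join subsearches are capped in rows and runtime, so at large volumes a stats values(...) by acct_id over both event sets is usually the more robust pattern.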
I have a problem with an alert triggering on a Splunk search that runs on a cron schedule.

Search query:

index=pdx_pfmseur0_fxs_event sourcetype=st_xfmseur0_fxs_event
| eval trackingid=mvindex('DOC.doc_keylist.doc_key.key_val', mvfind('DOC.doc_keylist.doc_key.key_name', "MCH-TrackingID"))
| rename gxsevent.gpstatusruletracking.eventtype as events_found
| rename file.receiveraddress as receiveraddress
| rename file.aprf as AJRF
| table trackingid events_found source receiveraddress AJRF
| stats values(trackingid) as trackingid, values(events_found) as events_found, values(receiveraddress) as receiveraddress, values(AJRF) as AJRF by source
| stats values(events_found) as events_found, values(receiveraddress) as receiveraddress, values(AJRF) as AJRF by trackingid
| search AJRF=ORDERS2 OR AJRF=ORDERS1
| stats count as total
| appendcols [search index=idx_pk8seur2_logs sourcetype="kube:container:8wj-order-service" processType=avro-order-create JPABS | stats dc(nativeId) as rush]
| appendcols [search index=idx_pk8seur2_logs sourcetype="kube:container:9wj-order-avro-consumer" flowName=9wj-order-avro-consumer customer="AB" (message="HBKK" OR message="MANU") | stats count as hbkk]
| eval gap = total-hbkk-rush
| table gap, total, rush
| eval status=if(gap>0, "OK", "KO")
| eval ressource="FME-FME-R:AB"
| eval service_offring="FME-FME-R"
| eval description="JPEDI - Customer AB has an Order Gap \n \nDetail : JPEDI - Customer AB has an Order Gap is now :" + gap + "\n\n\n\n;support_group=AL-XX-MAI-L2;KB=KB0078557"
| table ressource description gap total rush service_offring

I received three alerts (at 17:50, 18:50 and 21:50) containing the same result, gap=9.

Is there a solution to limit the alert to trigger only once per time interval:

from 08:50 to 10:50
from 10:50 to 15:50
from 15:50 to 21:50
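One hedged approach is alert throttling (suppression): after the alert fires, further triggers are suppressed for a set period. A sketch in savedsearches.conf (the same options appear in the UI under Trigger Conditions > Throttle); the 5-hour window matches the longest interval above, so this approximates rather than exactly matches the calendar windows:

alert.suppress = 1
alert.suppress.period = 5h

Exact calendar windows would instead need three clones of the alert, each cron-scheduled to run once inside its own window (e.g. 50 9 * * * for the 08:50-10:50 window).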
Task: identify which McAfee A/V agents have the latest updates happening.

Work done:
1) Created a lookup with all the unique source IPs, 54 in total.
2) Created a search that finds only the McAfee agents that have been updated, tagged those with the value 0, and then used a join to merge them with the lookup created earlier, tagged with the value 1.

Problem statement: I am looking for the src IPs/agents that are NOT updated, i.e. present in the lookup but absent from the logs, and my search shows no results. When I do it the other way around, i.e. looking for the src IPs/agents common to both the lookup and the logs, it works. Please help me fix the query.
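A common pattern for "in the lookup but not in the logs", sketched with hypothetical names (mcafee_agents.csv, srcip, and the index/sourcetype/search terms are placeholders): start from the lookup and subtract the hosts seen in update events with a NOT subsearch:

| inputlookup mcafee_agents.csv
| search NOT
    [ search index=mcafee sourcetype=mcafee:epo "update succeeded" earliest=-7d
      | stats count by srcip
      | fields srcip ]

This leaves exactly the lookup rows whose srcip never appeared in the update logs, with no join required.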
Hi community. When I tried to limit a HEC input to a certain index, I observed unexpected behavior in several cases. The configuration on the heavy forwarder is as follows:

[http://hec_instance]
description = HEC input for customer1
disabled = 0
index = customer1
indexes = [customer1]
token = XXXXXXXX

I am loading test events in JSON format using curl.

Case 1) The destination index is specified correctly in the metadata:

curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "index": "customer1", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: the event ends up in the correct index => expected behavior.

Case 2) The destination index specified in the metadata does not exist on the indexer cluster:

curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "index": "customer2", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: the event ends up nowhere; the HF does not complain and returns code 0 as if everything were fine. The indexer (not the HF) logs a message that events arrived for a nonexistent index => not expected behavior.

Case 3) The destination index specified in the metadata exists, but it is not in the 'indexes' list for this HEC definition:

curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "index": "main", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: no error is generated; the event is sent to the 'main' index => not expected behavior.

Case 4) No destination index is specified in the metadata:

curl -k "https://hfhost:8088/services/collector/event" -H "Authorization: Splunk XXXXXXXX" -d '{"event": "Hello, world!", "sourcetype": "cool-fields", "fields": {"device": "macbook", "users": ["joe", "bob"]}}'

Observed outcome: no error is generated; the event is sent to the 'main' index => not expected behavior.

Has someone observed this before? Is there something in my setup that I fail to see?

Additional info: the HF is running Splunk 8.2.2; the indexers are clustered and configured via the cluster manager. Thanks in advance for any info shedding light on this. Regards, Carlo
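One hedged observation, not a confirmed diagnosis: in inputs.conf the indexes setting expects a plain comma-separated list, so the bracketed value may be the culprit. If it does not parse, the allow-list is effectively empty and no restriction is applied, which would explain cases 3 and 4. A sketch of the stanza as it would normally be written:

[http://hec_instance]
description = HEC input for customer1
disabled = 0
index = customer1
indexes = customer1
token = XXXXXXXX

As for case 2: with indexer acknowledgment disabled, HEC reports success once the forwarder accepts the payload, so a nonexistent destination index only surfaces downstream on the indexers.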
Further to my previous post here, which was generously solved by ITWhisperer: Solved: Help with search to use for dashboard - link key-v... - Splunk Community

My chart looks like this (which is what I wanted to achieve). My challenge now is to have charts which:

1. have the mac_address as a variable rather than fixed, so that it's more flexible - can this be read from the index rather than having to type it? I have a dashboard that uses a hostname in this way (syntax below)
2. show the lines from multiple devices - stats from (mac_address_1 AND mac_address_2 AND ... up to mac_address_x) on the same chart
3. offer a drop-down menu to choose to display either mac_address_1 OR mac_address_2 OR ... (up to mac_address_x)

Again, any help much appreciated. NM

Current search:

| where key="counter_01" AND mac_address="xx:yy:zz:aa:bb:01"
| timechart values(value) by key

Sample search which allows me to view via a variable (hostname). Note: this is from an unrelated project; I'm just using it for illustration:

host=$host_name$ source="xxx" | timechart avg(value_1) as "Avg Value 1" avg(value_2) as "Avg Value 2" by host

One issue I see is that I already have a "by" defined in this project, which is "by key".
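A minimal Simple XML sketch of the drop-down idea, with a hypothetical index name: populate the input from the data itself, then reference the token in the panel search. Grouping by mac_address instead of by key draws one line per device, which also covers the multi-device case:

<input type="dropdown" token="mac" searchWhenChanged="true">
  <label>MAC address</label>
  <search>
    <query>index=your_index | stats count by mac_address | fields mac_address</query>
  </search>
  <fieldForLabel>mac_address</fieldForLabel>
  <fieldForValue>mac_address</fieldForValue>
</input>

Panel search (the existing where clause folds into the base search):

index=your_index key="counter_01" mac_address="$mac$"
| timechart values(value) by mac_address

Adding <choice value="*">All</choice> to the input would show every device at once.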
Hello Splunkers, I was wondering if there is a way to get the creation date of a correlation search. If so, what is it? I have found nothing anywhere. Thanks in advance, best regards!
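A hedged sketch: correlation searches are saved searches, so the REST endpoint exposes their metadata. Note the 'updated' field reflects the last modification rather than a true creation time, which (to my knowledge) is not stored; the _audit index may still hold the original creation event if it is within retention:

| rest /servicesNS/-/-/saved/searches
| search title="Your Correlation Search Name"
| table title, eai:acl.app, eai:acl.owner, updated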
Hello everyone, I cannot find how to move all values of a column (Total) one row up in a table.

This is my current scenario:

Day        Total
Monday
Tuesday    2
Wednesday  3
Thursday   4
Friday     5
Saturday   6
Sunday     7

This is my desired scenario:

Day        Total
Monday     2
Tuesday    3
Wednesday  4
Thursday   5
Friday     6
Saturday   7
Sunday

Can anyone help me, please? Thanks in advance
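A minimal sketch: reverse the rows, use streamstats to pull each row's value from the previous row (which, after the reverse, is the next row in the original order), then restore the order:

| reverse
| streamstats current=f window=1 last(Total) as Total_shifted
| reverse
| eval Total = Total_shifted
| fields - Total_shifted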
Hi All, I want to create a use case for when an account was inactive for 60 days and then got enabled/used again after those 60 days. I tried to sketch the logic, but I am not sure whether the query is correct. Could somebody please modify it if it needs changes?

index=wineventlog EventCode=4624 user="*@xyz.com" earliest=-60d latest=now()
| transaction user maxspan=60d search (EventCode!=)

Thank you
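One hedged way to express "logon after at least 60 days of inactivity" without transaction: sort each user's logons chronologically, compute the gap to the previous logon with streamstats, and keep the large gaps. The 180-day window is an assumption, chosen so that earlier logons exist to compare against:

index=wineventlog EventCode=4624 user="*@xyz.com" earliest=-180d latest=now()
| sort 0 user, _time
| streamstats current=f window=1 last(_time) as prev_logon by user
| eval gap_days = round((_time - prev_logon) / 86400, 1)
| where gap_days >= 60
| table _time, user, gap_days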
In the user role authorization settings, the following two capabilities are enabled by default. What happens if I disable each of them?

<Capability name>
(1) list_all_objects
(2) rest_apps_view

When I disabled (1), the entire account name in the menu bar in the upper right corner of the screen disappeared. Please let me know whether this behavior is expected.