All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


That's due to how Splunk searches its indexes. Unless the field is indexed and properly configured, or you're searching with wildcards, Splunk will try to find the exact value you're searching for in its index files. For example, I'm searching my home environment for

index="winevents" EventCode=7040

A fairly simple search. When I look into the job inspector and get the job log, I see how Splunk executes the search against its indexed data:

01-12-2024 18:06:20.986 INFO UnifiedSearch [1919978 searchOrchestrator] - Expanded index search = (index="winevents" EventCode=7040)
01-12-2024 18:06:20.986 INFO UnifiedSearch [1919978 searchOrchestrator] - base lispy: [ AND 7040 index::winevents ]

As you can see, Splunk didn't optimize the search (because there wasn't much to optimize - the search was very simple), but the resulting lispy search looks literally for the value "7040" with a metadata field of index equal to winevents (actually, index is treated a bit differently than other indexed fields, but for our purposes here it doesn't matter). Only after finding the events that do contain "7040" anywhere within their body will Splunk try to parse out the EventCode field from them and match that value (possibly numerically) against your argument.
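To make the literal-token behavior explicit, you can wrap a value in TERM() (index and field names here are the ones from the example above). This is only a sketch - whether such a token exists in the index depends on how your events are segmented at index time; if the raw text contains EventCode=7040 with no breakers around the =, you can target that exact indexed token:

index="winevents" TERM(EventCode=7040)

Checking the "base lispy" line in the job inspector, as shown above, is the way to verify what actually gets searched against the index.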
I have a lookup table I am using to pull in contact information based on correlation of a couple of fields. The way the lookup table is formatted makes my results look different than what I want to see. If I can consolidate the lookup table, it will fix my issue, but I can't figure out how to do it. The table currently looks like this:

Org    Branch    Role       Name
Org A  Branch 1  President  Jack
Org A  Branch 1  VP         Jill
Org A  Branch 1  Manager    Mary
Org A  Branch 2  President  Hansel
Org A  Branch 2  VP         Gretel
Org A  Branch 3  VP         Mickey
Org A  Branch 3  Manager    Minnie

I use Org and Branch as matching criteria and want to pull out the names for each role. I do not want to see multivalue fields when I am done. The current search looks like:

[base search] | lookup orgchart Org Branch OUTPUTNEW Role | mvexpand Role | lookup orgchart Org Branch Role OUTPUTNEW Name

This works, but the mvexpand (obviously) creates a new line for each role, and I do not want multiple lines per pair in my final results. I want a single line for every Org/Branch pair showing all the roles and names. I am thinking the way to solve this is to reformat the lookup table to look like the table below, then modify my lookup. Is there a way to "transpose" just those two fields?

[base search] | lookup orgchart Org Branch OUTPUTNEW President, VP, Manager

Org    Branch    President  VP      Manager
Org A  Branch 1  Jack       Jill    Mary
Org A  Branch 2  Hansel     Gretel
Org A  Branch 3             Mickey  Minnie

Thank you!
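One possible way to reshape the lookup into that wide layout (a sketch - orgchart_wide is a hypothetical name for the new lookup) is to combine the two key fields into one, pivot with chart, and then split them back apart:

| inputlookup orgchart
| eval OrgBranch = Org . "|" . Branch
| chart values(Name) over OrgBranch by Role
| eval Org = mvindex(split(OrgBranch, "|"), 0), Branch = mvindex(split(OrgBranch, "|"), 1)
| fields - OrgBranch
| outputlookup orgchart_wide

chart creates one column per distinct Role value, so the result should have President/VP/Manager columns keyed by Org and Branch; the separator character is assumed not to appear in your Org or Branch values.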
It's generally not the best idea to manipulate structured data with regexes if you can use the built-in functionality for handling the structure - like the spath command or auto-kv functionality. Even if your data is guaranteed to be simple (you will never have an array or sub-object as a value), with the built-in handling you don't have to worry about finding proper field boundaries, escaping, and so on.
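As a small illustration of the point above (the JSON and field name here are made up), a regex that looks reasonable breaks as soon as a value contains an escaped quote, while spath parses it correctly:

| makeresults
| eval _raw = "{\"user\": \"a \\\"quoted\\\" name\"}"
| spath path=user output=user_spath
| rex "\"user\":\s*\"(?<user_rex>[^\"]+)\""
| table user_spath user_rex

Here user_spath should return the full value, while the [^"]+ in the rex stops at the first quote character it meets, escaped or not.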
By default (unless the limit is reconfigured) join uses only 50,000 results from the subsearch to join with the outer search, and runs for only 60 seconds to generate those results. If your subsearch exceeds 60 seconds of execution time or generates more than 50k results, it's silently finalized and only the results returned so far (up to 50k) are used for the join. With other subsearch uses the limits can be lower - even down to 10k results. That's one of the reasons subsearches are best avoided: since they are silently finalized, you don't get any feedback that your search returned partial data, and you're not aware that your final results might be incomplete or plain wrong.
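For reference, these limits live in limits.conf. A sketch with the commonly documented defaults (verify the setting names and values against the limits.conf spec for your Splunk version before changing anything):

# limits.conf
[join]
subsearch_maxout = 50000    # max subsearch results used by join
subsearch_maxtime = 60      # seconds before the join subsearch is finalized

[subsearch]
maxout = 10000              # general subsearch result cap
maxtime = 60                # seconds before a general subsearch is finalized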
If a subsearch has more than 50,000 events or takes longer than 1 minute (I think) to run, it will auto-finalize. Either of these scenarios will cause the data returned from the subsearch to be truncated and incomplete. BTW, I don't think the search you shared needs a join/subsearch; something like this will probably do the same thing:

index="aaam_devops_elasticsearch_idx" ((project="Einstein360_TicketsCreated_ElasticSearch_20210419" AND "source.TransactionName"="ITGTicketCreated") OR (project="Einstein360_TruckRollCreated_ElasticSearch_20210420" AND "source.TransactionName"="Truck_Roll_Create_Result"))
| timechart span=1d
    dc(eval(case('project'=="Einstein360_TicketsCreated_ElasticSearch_20210419" AND 'source.TransactionName'=="ITGTicketCreated", id))) as ITGTicketCreated,
    dc(eval(case('project'=="Einstein360_TruckRollCreated_ElasticSearch_20210420" AND 'source.TransactionName'=="Truck_Roll_Create_Result", id))) as TruckRollCreated
I don't understand these new requirements. If AB is filtered out, then it cannot be searched. You cannot search for AM05 since it doesn't exist until the appendpipe command runs. What is the final result expected to look like?
As Ismo said, it might have worked fairly OK, but the rules for splitting roles between different hosts actually make sense. They let you scale each component according to your needs, you can - if needed - plan for HA/DR, you can plan migrations more easily, and you can debug more easily since separate roles don't interfere with each other.
If this is JSON, as you already have, it's easier and better to use spath to extract those. Based on your screenshot you should already have this in the field Properties.appHdr.bizMsgIdr. In that case you can try e.g.

... | rename Properties.appHdr.bizMsgIdr as bizMsgIdr

if you really need to rename/use the short version. Another option is to use

... | eval bizMsgIdr = 'Properties.appHdr.bizMsgIdr'

(note the single quotes - they are needed around field names containing dots on the right-hand side of eval, otherwise the dots are interpreted as concatenation).
Hi Splunkers, I would like to export logs (raw/csv) out of Splunk Cloud periodically to send them to GCP Pub/Sub. How can I achieve this? Appreciate ideas here.
Do we have an update to the version of Mule supported?
Hi, these are quite interesting findings. What do you have in the original log/event data? Is it "logid=13" or just 13, with the field extracted and named at search or ingest time? Does it matter how many zeroes you add as a prefix? r. Ismo
Hi, there are some ways to get that information. Which one is best depends on your environment. I suppose that you have the MC (Monitoring Console) configured and in use? If not, please take it into use and utilise it for this. With MC you can get this information directly from Settings -> MC -> Instances.

Another option is to use a REST query to get this from all instances, but this requires that those instances are defined as search peers of the node where you run it. In MC you have done this already:

| rest splunk_server=* /services/server/info f=host f=host_fqdn f=version

A third option is to try to get this information from the _internal log, but this requires that you have extended its retention time enough to find the last start messages. By default its retention is too short for this. r. Ismo
This is not working, and there is no result in the column issrDsclsrReqId. Is it possible to extract the value of "bizMsgIdr" from the field Properties.appHdr?

Splunk command:

`macro_events_prod_srt_shareholders_esa` sourcetype="mscs:azure:eventhub" Name="Received Disclosure Response Command" "res1caf3c2ac2b3b6d180ff0001aa7eefab"

Result in the column Properties.appHdr:

{ "fr": { "fiId": { "finInstnId": { "bicfi": "BNPAGB22PBG" } } }, "to": { "fiId": { "finInstnId": { "bicfi": "SICVFRPPEII" } } }, "bizMsgIdr": "res1caf3c2ac2b3b6d180ff0001aa7eefab", "msgDefIdr": "seev.047.001.02", "creDt": "2024-01-11T21:03:56.000Z" }
Hi, why not this way?

| makeresults
| eval _raw = "{ \"shrhldrsIdDsclsrRspn\": { \"dsclsrRspnId\": \"0000537ede1c5e1084490000aa7eefab\", \"issrDsclsrReqRef\": { \"issrDsclsrReqId\": \"eiifr000005229220231229162227\", \"finInstrmId\": { \"isin\": \"FR0000052292\" }, \"shrhldrsDsclsrRcrdDt\": { \"dt\": { \"dt\": \"2023-12-29\" } } }, \"pgntn\": { \"lastPgInd\": true, \"pgNb\": \"1\" }, \"rspndgIntrmy\": { \"ctctPrsn\": { \"emailAdr\": \"ipb.asset.servicing@bnpparibas.com\", \"nm\": \"IPB ASSET SERVICING\" }, \"id\": { \"anyBIC\": \"BNPAGB22PBG\" }, \"nmAndAdr\": { \"adr\": { \"adrTp\": 0, \"bldgNb\": \"10\", \"ctry\": \"GB\", \"ctrySubDvsn\": \"LONDON\", \"pstCd\": \"NW16AA\", \"strtNm\": \"HAREWOOD AVENUE\", \"twnNm\": \"LONDON\" }, \"nm\": \"BNP PARIBAS PRIME BROKERAGE\" } } } }" ``` generate test data```
| spath
| table shrhldrsIdDsclsrRspn.issrDsclsrReqRef.issrDsclsrReqId

If needed you can also use the spath function with eval. If you really want to use rex, then this should work:

| makeresults
| eval _raw = "{ \"shrhldrsIdDsclsrRspn\": { \"dsclsrRspnId\": \"0000537ede1c5e1084490000aa7eefab\", \"issrDsclsrReqRef\": { \"issrDsclsrReqId\": \"eiifr000005229220231229162227\", \"finInstrmId\": { \"isin\": \"FR0000052292\" }, \"shrhldrsDsclsrRcrdDt\": { \"dt\": { \"dt\": \"2023-12-29\" } } }, \"pgntn\": { \"lastPgInd\": true, \"pgNb\": \"1\" }, \"rspndgIntrmy\": { \"ctctPrsn\": { \"emailAdr\": \"ipb.asset.servicing@bnpparibas.com\", \"nm\": \"IPB ASSET SERVICING\" }, \"id\": { \"anyBIC\": \"BNPAGB22PBG\" }, \"nmAndAdr\": { \"adr\": { \"adrTp\": 0, \"bldgNb\": \"10\", \"ctry\": \"GB\", \"ctrySubDvsn\": \"LONDON\", \"pstCd\": \"NW16AA\", \"strtNm\": \"HAREWOOD AVENUE\", \"twnNm\": \"LONDON\" }, \"nm\": \"BNP PARIBAS PRIME BROKERAGE\" } } } }" ``` generate test data```
| rex "\"issrDsclsrReqId\"\s*:\s*\"(?<issrDsclsrReqId>[^\"]+)\""
| table issrDsclsrReqId

r. Ismo
Hi, you should probably look at data models and how they can be used to get the same or even better search times:

https://docs.splunk.com/Documentation/CIM/5.3.1/User/Overview
https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Aboutdatamodels

There are also some DM trainings on the EDU site. r. Ismo
Hello, I am using a field extracted at search time called "src_ip". To optimize search response times, I have created an indexed field extraction called "src_ip-index". How can I configure Splunk on the back end so that end users query only a single field that draws on both "src_ip-index" and "src_ip", preferring "src_ip-index" when available for better performance? Hope it is clear enough. Best regards,
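One possible back-end approach (a sketch, and it assumes you can name the index-time extraction src_ip rather than src_ip-index): if the indexed field carries the same name as the search-time field, users keep querying a single field, and Splunk can satisfy the filter from the index where the indexed copy exists. Declaring the field in fields.conf on the search head tells Splunk it is indexed:

# fields.conf (search head) - hypothetical sketch
[src_ip]
INDEXED = true

As far as I know there is no built-in "prefer the indexed twin" merge between two differently named fields, so unifying the name is usually the cleaner route.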
Good to hear that it has worked, but you must remember that if you have any cluster, SHC, DS, etc. issue and need support from Splunk, you will be in deep s.... And Splunk's first requirement will be that you fix your deployment architecture; only after that can they start to support it.
Hello, I want to extract the field issrDsclsrReqId using the rex command. Can someone please help me with the command to extract the value of the field issrDsclsrReqId, which is eiifr000005229220231229162227?

{ "shrhldrsIdDsclsrRspn": { "dsclsrRspnId": "0000537ede1c5e1084490000aa7eefab", "issrDsclsrReqRef": { "issrDsclsrReqId": "eiifr000005229220231229162227", "finInstrmId": { "isin": "FR0000052292" }, "shrhldrsDsclsrRcrdDt": { "dt": { "dt": "2023-12-29" } } }, "pgntn": { "lastPgInd": true, "pgNb": "1" }, "rspndgIntrmy": { "ctctPrsn": { "emailAdr": "ipb.asset.servicing@bnpparibas.com", "nm": "IPB ASSET SERVICING" }, "id": { "anyBIC": "BNPAGB22PBG" }, "nmAndAdr": { "adr": { "adrTp": 0, "bldgNb": "10", "ctry": "GB", "ctrySubDvsn": "LONDON", "pstCd": "NW16AA", "strtNm": "HAREWOOD AVENUE", "twnNm": "LONDON" }, "nm": "BNP PARIBAS PRIME BROKERAGE" } } } }
Hi Splunkers, I must recover the Splunk version of every component in a particular environment. I don't have access to all the GUIs and/or .conf files on all machines, so the idea is to recover this info with a Splunk search. Here: How-to-identify-a-list-of-forwarders-sending-data I found a very useful search that returns a lot of info about all forwarders: UF, HF, and so on. Since I'm on an on-prem environment rather than cloud, I also have to recover the Splunk version used on indexers and search heads. So, my question is: how should I change the search from the link above to get the version from IDXs and SHs?
I am wondering why the following two searches, when applied to exactly the same time range, return different values:

index=<my_index> logid=0000000013 | stats count
index=<my_index> logid=13 | stats count

The first one returns many more results than the second. (The type indicated by Splunk for this field is "number", not "string".)
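The accepted answer's point about literal token matching can be illustrated outside Splunk: the index stores the literal string tokens from the raw event, so "0000000013" and "13" are different terms even though they are numerically equal. A rough Python sketch of that distinction (the tokenizer is a crude stand-in for Splunk's real index-time segmentation, and the sample event is made up):

```python
import re

def tokenize(raw):
    """Crude stand-in for index-time segmentation:
    split the raw event on whitespace and common minor breakers."""
    return [t for t in re.split(r"[\s=,;]+", raw) if t]

raw_event = "ts=2024-01-12 logid=0000000013 msg=done"
tokens = tokenize(raw_event)

# The literal token "0000000013" is present in the "index"...
print("0000000013" in tokens)   # True

# ...but the numerically equal "13" is not, so a search for
# logid=13 cannot be satisfied from the index alone.
print("13" in tokens)           # False

# Only after events are retrieved does field extraction compare
# values, and there the comparison can succeed numerically:
print(int("0000000013") == 13)  # True
```

This is why the two searches walk different sets of candidate events even though the extracted field values compare equal.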