All Posts


Creating additional indexed fields has its uses, but it also has its drawbacks, and very often a speedup of your searches can be achieved in other ways (summary indexing, report acceleration, data model acceleration). So defining indexed fields just to make a search run faster may in many cases not be the best idea. Often it's enough to search your data properly to get a big increase in efficiency. Theoretically, you could define a calculated field using coalesce() that returns the indexed value if found (and Splunk might be able to optimize the search properly), but due to how Splunk search works, it might not give you a big search speed improvement in some cases.
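As a minimal sketch of that coalesce() idea (the sourcetype and field names here are hypothetical, not from the post), the calculated field would live in props.conf:

# props.conf -- hypothetical sourcetype and field names, for illustration only
[my:sourcetype]
# prefer the indexed copy of the field if present, fall back to the search-time extraction
EVAL-event_code = coalesce(indexed_event_code, extracted_event_code)

The caveat above still applies: the optimizer cannot turn a coalesce() into an indexed-term (lispy) lookup, so the index scan itself is not necessarily any faster.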
Splitting your lookup to include new fields "President", "VP", "Manager" would work but doesn't really scale if the role field has high cardinality. Here is another approach that is more scalable and can be generalized. You could add a net-new field to your lookup named role_json that contains the role<-->name mapping info.

Edit: Just realized your request was to not have results in the mvexpanded format I first showed. So here is an updated method.

<base_search>
| lookup orgchart Org, Branch OUTPUT role_json
| foreach mode=multivalue role_json
    [ | eval tmp_json=if(isnull(tmp_json),
        json_object(spath('<<ITEM>>', "Role"), spath('<<ITEM>>', "Name")),
        json_set(tmp_json, spath('<<ITEM>>', "Role"), spath('<<ITEM>>', "Name"))) ]
``` capture any role_json values that are single values ```
| eval tmp_json=if(mvcount(role_json)==1,
    json_object(spath(role_json, "Role"), spath(role_json, "Name")),
    'tmp_json')
``` remove role_json (no longer needed) ```
| fields - role_json
``` parse out tmp_json to table all the proper mappings ```
| spath input=tmp_json
``` remove tmp_json (no longer needed) ```
| fields - tmp_json

Directly after the lookup, role_json holds the raw JSON mappings; after the foreach loops, each Role has become its own column. I added a few extra entries to the lookup to demonstrate that this method is dynamic and doesn't need any hardcoded field names to account for potential new values.

Getting the new JSON object field into your existing lookup would look something like this (provided it's a CSV; if it is a KV store then you would probably need to update the definition to include the new field):

| inputlookup orgchart
| tojson str(Role) str(Name) output_field=role_json
| outputlookup orgchart
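If you want to try the foreach/json_object pattern without building the lookup first, here is a self-contained sketch with made-up sample values (foreach mode=multivalue requires Splunk 9.0+):

| makeresults
| eval role_json=mvappend(
    json_object("Role", "President", "Name", "Jack"),
    json_object("Role", "VP", "Name", "Jill"),
    json_object("Role", "Manager", "Name", "Mary"))
| foreach mode=multivalue role_json
    [ | eval tmp_json=if(isnull(tmp_json),
        json_object(spath('<<ITEM>>', "Role"), spath('<<ITEM>>', "Name")),
        json_set(tmp_json, spath('<<ITEM>>', "Role"), spath('<<ITEM>>', "Name"))) ]
| spath input=tmp_json
| fields - _time role_json tmp_json

This should return a single row with President, VP, and Manager as separate columns, which is the shape the question asks for.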
For example, I have a location field containing AB, AC, AD. I need to sum these three locations and create a new location named AM05, without replacing the existing AB, AC and AD. When searching for AM05, I want to see the summed values, and when searching for AB, it should display the existing value! @richgalloway
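One way to sketch that (the value field name is an assumption, not from the thread) is to let appendpipe add the synthetic AM05 row alongside the originals:

<base_search>
| stats sum(value) AS value BY location
``` append an AM05 row summing AB, AC and AD, keeping the original rows ```
| appendpipe
    [ search location IN ("AB","AC","AD")
      | stats sum(value) AS value
      | eval location="AM05" ]

As the reply below points out, AM05 only exists after the appendpipe runs, so any filtering on it has to happen after that point in the pipeline.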
Apart from the MC or direct REST calls against your "main" components: if you have forwarder monitoring enabled in your MC, you'll see a list of the forwarders which connected to your environment (the inventory is dynamically updated based on the version each UF reports to _internal).
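If you'd rather query it yourself, here is a sketch of the underlying _internal search (the group and field names are what the tcpin_connections metrics commonly contain; verify against your own data):

index=_internal sourcetype=splunkd group=tcpin_connections fwdType=*
| stats latest(version) AS version latest(_time) AS last_connected BY hostname sourceIp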
Hi @raysonjoberts,
using a lookup as the one you described, please try something like this:

| inputlookup <your_lookup>
| stats
    values(eval(if(Role="President",Name,""))) AS President
    values(eval(if(Role="VP",Name,""))) AS VP
    values(eval(if(Role="Manager",Name,""))) AS Manager
    BY Org Branch

Ciao.
Giuseppe
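A quick self-contained way to test that stats pivot, using the sample rows from the question (makeresults format=csv needs Splunk 9.0+):

| makeresults format=csv data="Org,Branch,Role,Name
Org A,Branch 1,President,Jack
Org A,Branch 1,VP,Jill
Org A,Branch 1,Manager,Mary
Org A,Branch 2,President,Hansel
Org A,Branch 2,VP,Gretel"
| stats
    values(eval(if(Role="President",Name,null()))) AS President
    values(eval(if(Role="VP",Name,null()))) AS VP
    values(eval(if(Role="Manager",Name,null()))) AS Manager
    BY Org Branch

Using null() instead of "" in the eval keeps empty strings out of the values() lists, which is a small variation on the answer above.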
That's due to how Splunk searches its indexes. Unless the field is indexed and properly configured, or you're searching with wildcards, Splunk will try to find the exact value you're searching for in its index files. For example, I'm searching my home environment for:

index="winevents" EventCode=7040

A fairly simple search. When I look into job inspect and get the job log, I see how Splunk executes the search against its indexed data:

01-12-2024 18:06:20.986 INFO UnifiedSearch [1919978 searchOrchestrator] - Expanded index search = (index="winevents" EventCode=7040)
01-12-2024 18:06:20.986 INFO UnifiedSearch [1919978 searchOrchestrator] - base lispy: [ AND 7040 index::winevents ]

As you can see, Splunk didn't optimize the search (because there wasn't much to optimize; the search was very simple), but the resulting lispy search looks literally for the value "7040" with a metadata field of index equal to winevents (actually, index is treated a bit differently than other indexed fields, but for us here it doesn't matter). Only after finding those events that contain "7040" anywhere within their body will Splunk try to parse out the EventCode field from them and match that value (possibly numerically) against your argument.
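To see the flip side, compare these two searches against the same (hypothetical) winevents index and check the base lispy in each job log yourself:

index="winevents" EventCode=7040      ``` token 7040 lands in the lispy: fast index lookup ```
index="winevents" EventCode=*7040     ``` leading wildcard: no usable literal token, so Splunk must scan far more events and extract EventCode from each ```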
I have a lookup table I am using to pull in contact information based on correlation of a couple of fields. The way the lookup table is formatted makes my results look different than what I want to see. If I can consolidate the lookup table, it will fix my issue, but I can't figure out how to do it. The table currently looks like this:

Org    Branch    Role       Name
Org A  Branch 1  President  Jack
Org A  Branch 1  VP         Jill
Org A  Branch 1  Manager    Mary
Org A  Branch 2  President  Hansel
Org A  Branch 2  VP         Gretel
Org A  Branch 3  VP         Mickey
Org A  Branch 3  Manager    Minnie

I use the Org and Branch as matching criteria and want to pull out the names for each role. I do not want to see multivalue fields when I am done. The current search looks like:

[base search]
| lookup orgchart Org Branch OUTPUTNEW Role
| mvexpand Role
| lookup orgchart Org Branch Role OUTPUTNEW Name

This works, but the mvexpand (obviously) creates a new line for each role, and I do not want multiple lines for each in my final results. I want a single line for every Org/Branch pair showing all the Roles and Names. I am thinking the way of solving this is reformatting the lookup table to look like the table below, then modifying my lookup. Is there a way to "transpose" just the 2 fields?

[base search]
| lookup orgchart Org Branch OUTPUTNEW President, VP, Manager

Org    Branch    President  VP      Manager
Org A  Branch 1  Jack       Jill    Mary
Org A  Branch 2  Hansel     Gretel
Org A  Branch 3             Mickey  Minnie

Thank you!
It's generally not the best idea to manipulate structured data with regexes if you can use the built-in functionality for handling the structure, like the spath command or the auto-kv functionality. Even if your data is guaranteed to be simple (you will never have an array or a subobject as a value), with the built-in handling you don't have to worry about finding proper field boundaries, escaping and so on.
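For instance, a minimal sketch of the spath route, with made-up JSON just to illustrate:

| makeresults
| eval _raw="{\"user\": {\"name\": \"jdoe\", \"role\": \"admin\"}}"
| spath path=user.name output=username
| table username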
By default (unless the limit is reconfigured), join uses only 50,000 results from the subsearch to join with the outer search, and the subsearch runs for only 60 seconds to generate those results. If your subsearch exceeds 60 seconds of execution time or generates more than 50k results, it's silently finalized and only the results returned so far (up to 50k) are used for the join. With other subsearch uses the limits can be lower, even down to 10k results. That's one of the reasons subsearches are best avoided: since they are silently finalized, you don't get any feedback that the subsearch was cut short, so you're not aware that your final results might be incomplete or plain wrong.
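Those defaults live in limits.conf; a sketch of the relevant stanzas as I understand them (verify the setting names against the limits.conf spec for your version before changing anything, and keep in mind that raising them makes join even more memory-hungry):

# limits.conf -- defaults matching the behavior described above
[join]
subsearch_maxout = 50000
subsearch_maxtime = 60

[subsearch]
maxout = 10000
maxtime = 60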
If a subsearch has more than 50,000 events or takes longer than 1 minute (I think) to run, it will auto-finalize. Either of these scenarios will cause the data returned from the subsearch to be truncated and incomplete. BTW, I don't think the search you shared needs a join/subsearch; something like this will probably do the same thing:

index="aaam_devops_elasticsearch_idx"
    ((project="Einstein360_TicketsCreated_ElasticSearch_20210419" AND "source.TransactionName"="ITGTicketCreated")
    OR (project="Einstein360_TruckRollCreated_ElasticSearch_20210420" AND "source.TransactionName"="Truck_Roll_Create_Result"))
| timechart span=1d
    dc(eval(case('project'=="Einstein360_TicketsCreated_ElasticSearch_20210419" AND 'source.TransactionName'=="ITGTicketCreated", id))) as ITGTicketCreated,
    dc(eval(case('project'=="Einstein360_TruckRollCreated_ElasticSearch_20210420" AND 'source.TransactionName'=="Truck_Roll_Create_Result", id))) as TruckRollCreated
I don't understand these new requirements.  If AB is filtered out then it cannot be searched.  You cannot search for AM05 since it doesn't exist until the appendpipe command runs. What is the final result expected to look like?
As Ismo said, it might have worked fairly OK, but the rules for splitting roles between different hosts actually make sense. They let you scale each component according to your needs, you can plan for HA/DR if needed, you can plan migrations more easily, and you can debug more easily since separate roles don't interfere with each other.
If this is JSON, as you already have, it's easier and better to use spath to extract those. Based on your screenshot you should already have this in the field Properties.appHdr.bizMsgIdr. In that case you can try e.g.

... | rename Properties.appHdr.bizMsgIdr as bizMsgIdr

if you really need to rename/use the short version. Another option is

... | eval bizMsgIdr = 'Properties.appHdr.bizMsgIdr'

(note the single quotes: in eval, a field name containing dots must be quoted, otherwise the dots are parsed as concatenation operators).
Hi Splunkers, I would like to export logs (raw/CSV) out of Splunk Cloud periodically and send them to GCP Pub/Sub. How can I achieve this? Appreciate ideas here.
Do we have an update to the version of Mule supported?
Hi,
these are quite interesting findings. What do you have in the original log/event data? Is it "logid=13" or just 13, and has the field been extracted and named at search time or at ingestion time? Does it matter how many zeroes you add as a prefix?
r. Ismo
Hi,
there are some ways to get that information; which one is best depends on your environment. I suppose that you have the MC configured and in use? If not, please take it into use and utilise it for this.

With the MC you can get this information directly from Settings -> MC -> Instances.

Another option is to use a REST query to get this from all instances, but this requires that they are defined as search peers of the node where you run it. In the MC you have done this already:

| rest splunk_server=* /services/server/info f=host f=host_fqdn f=version

A third option is to try to get this information from the _internal log, but this requires that you have extended its retention time enough to still find the last start messages. By default its retention is too short for this.

r. Ismo
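A sketch of that third option (the component name and message text are what splunkd typically logs at startup; verify the rex against your own _internal events):

index=_internal sourcetype=splunkd component=loader "Splunkd starting"
| rex "build\s+(?<build>\w+)\)"
| stats latest(_time) AS last_start latest(build) AS build BY host
| convert ctime(last_start)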
This is not working, and there is no result in the column issrDsclsrReqId. Is it possible to extract the value of "bizMsgIdr" from the field Properties.appHdr?

Splunk command:

`macro_events_prod_srt_shareholders_esa` sourcetype="mscs:azure:eventhub" Name="Received Disclosure Response Command" "res1caf3c2ac2b3b6d180ff0001aa7eefab"

Result in the column Properties.appHdr:

{
  "fr": { "fiId": { "finInstnId": { "bicfi": "BNPAGB22PBG" } } },
  "to": { "fiId": { "finInstnId": { "bicfi": "SICVFRPPEII" } } },
  "bizMsgIdr": "res1caf3c2ac2b3b6d180ff0001aa7eefab",
  "msgDefIdr": "seev.047.001.02",
  "creDt": "2024-01-11T21:03:56.000Z"
}
Hi,
why not this way?

| makeresults
| eval _raw = "{ \"shrhldrsIdDsclsrRspn\": { \"dsclsrRspnId\": \"0000537ede1c5e1084490000aa7eefab\", \"issrDsclsrReqRef\": { \"issrDsclsrReqId\": \"eiifr000005229220231229162227\", \"finInstrmId\": { \"isin\": \"FR0000052292\" }, \"shrhldrsDsclsrRcrdDt\": { \"dt\": { \"dt\": \"2023-12-29\" } } }, \"pgntn\": { \"lastPgInd\": true, \"pgNb\": \"1\" }, \"rspndgIntrmy\": { \"ctctPrsn\": { \"emailAdr\": \"ipb.asset.servicing@bnpparibas.com\", \"nm\": \"IPB ASSET SERVICING\" }, \"id\": { \"anyBIC\": \"BNPAGB22PBG\" }, \"nmAndAdr\": { \"adr\": { \"adrTp\": 0, \"bldgNb\": \"10\", \"ctry\": \"GB\", \"ctrySubDvsn\": \"LONDON\", \"pstCd\": \"NW16AA\", \"strtNm\": \"HAREWOOD AVENUE\", \"twnNm\": \"LONDON\" }, \"nm\": \"BNP PARIBAS PRIME BROKERAGE\" } } } }"
``` generate test data ```
| spath
| table shrhldrsIdDsclsrRspn.issrDsclsrReqRef.issrDsclsrReqId

If needed you can also use the spath function with eval. If you really want to use rex, then this should work:

| makeresults
| eval _raw = "{ \"shrhldrsIdDsclsrRspn\": { \"dsclsrRspnId\": \"0000537ede1c5e1084490000aa7eefab\", \"issrDsclsrReqRef\": { \"issrDsclsrReqId\": \"eiifr000005229220231229162227\", \"finInstrmId\": { \"isin\": \"FR0000052292\" }, \"shrhldrsDsclsrRcrdDt\": { \"dt\": { \"dt\": \"2023-12-29\" } } }, \"pgntn\": { \"lastPgInd\": true, \"pgNb\": \"1\" }, \"rspndgIntrmy\": { \"ctctPrsn\": { \"emailAdr\": \"ipb.asset.servicing@bnpparibas.com\", \"nm\": \"IPB ASSET SERVICING\" }, \"id\": { \"anyBIC\": \"BNPAGB22PBG\" }, \"nmAndAdr\": { \"adr\": { \"adrTp\": 0, \"bldgNb\": \"10\", \"ctry\": \"GB\", \"ctrySubDvsn\": \"LONDON\", \"pstCd\": \"NW16AA\", \"strtNm\": \"HAREWOOD AVENUE\", \"twnNm\": \"LONDON\" }, \"nm\": \"BNP PARIBAS PRIME BROKERAGE\" } } } }"
``` generate test data ```
| rex "\"issrDsclsrReqId\"\s*:\s*\"(?<issrDsclsrReqId>[^\"]+)\""
| table issrDsclsrReqId

r. Ismo
Hi,
you should probably look at data models and how they can be used to get the same or even better search-time performance:
https://docs.splunk.com/Documentation/CIM/5.3.1/User/Overview
https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Aboutdatamodels
There are also some DM trainings on the EDU site.
r. Ismo
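For example, a sketch of querying an accelerated data model with tstats (this assumes the CIM Authentication data model is installed and accelerated in your environment):

| tstats summariesonly=true count from datamodel=Authentication where Authentication.action="failure" by Authentication.src
| sort - count

With summariesonly=true, tstats reads only the pre-built acceleration summaries instead of raw events, which is where the search-time speedup comes from.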