All Posts

Your case is completely different because you want to keep some of the "outer" information shared between the separate events (which actually isn't such a good idea, because your license usage will get multiplied across those events). As for the scripted input - see these resources for the technicalities on the Splunk side. Of course the internals - splitting the event - are entirely up to you. https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/ScriptSetup https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs
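For illustration only, here is a minimal sketch of what such a scripted input could look like in Python, assuming the LiveEvent lines are appended to a single log file; the file path, the list of shared fields and the lack of checkpointing are all assumptions to adapt, not a ready-made implementation:

#!/usr/bin/env python3
# Sketch: split the "ports" array of a LiveEvent line into one event per stock_id,
# copying the shared top-level fields onto each emitted event.
import json
import sys

SOURCE_FILE = "/var/log/liveevents/liveevent.log"  # hypothetical path - adjust
SHARED_FIELDS = ["uuid", "session_id", "session_sequence_number",
                 "version", "event_type", "app_server_timestamp"]

def split_line(line):
    # Separate the leading timestamp from the JSON payload.
    prefix, _, payload = line.partition("LiveEvent:")
    doc = json.loads(payload)
    shared = {key: doc.get(key) for key in SHARED_FIELDS}
    shared["timestamp"] = prefix.strip()
    # One output event per entry in the "ports" array, shared fields copied in.
    for port in doc.get("data", {}).get("ports", []):
        event = dict(shared)
        event.update(port)  # stock_id, picks, wait_bin, ...
        yield event

def main():
    with open(SOURCE_FILE, encoding="utf-8") as source:
        for line in source:
            if "LiveEvent:" not in line:
                continue
            for event in split_line(line):
                # Everything printed to stdout is handed to Splunk for indexing.
                print(json.dumps(event))
    sys.stdout.flush()

if __name__ == "__main__":
    main()

A real input would also need to remember how far it has already read (for example via a checkpoint file) so events aren't re-indexed on every run, plus an inputs.conf stanza pointing at the script and a sourcetype with sensible line breaking.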
Hi @PickleRick, The JSON structure is very solid and doesn't change, except that there can be many (1000+) or few (4) "stock_id" entries. You mentioned scripted inputs as well - do you have any suggestions/examples?
Hi @Gravoc, maybe you created the lookup in a different app and didn't add the Global sharing level to the lookup and to the definition. The ES lookups, instead, are shared at the Global level, which is probably why it runs there. Try sharing your lookup and definition as Global. Ciao. Giuseppe
Hi @richgalloway, Thanks for your input. Do you happen to have any scripting ideas for this?
Thank you for the response!!
That one relies on the fact that it was a simple array that could be cut into pieces with regexes. That splitting mechanism would break if the data changed - for example, if another field besides the "local" one were added to the "outer" JSON.
I have 2 indexes - index_1 and index_2.

index_1 has the following fields: index1Id, currEventId, prevEventId
index_2 has the following fields: index2Id, eventId, eventOrigin

currEventId and prevEventId in index_1 will have the same values as eventId in index_2. Now, I am trying to create a table in the following format: index1Id, prevEventId, prevEventOrigin, currEventId, currEventOrigin

I tried the joins with the query below, but I see that columns 3 and 5 are mostly blank, so I am not sure what is wrong with the query.

index="index_1"
| join type=left currEventId
    [ search index="index_2"
    | rename eventId as currEventId, eventOrigin as currEventOrigin
    | fields currEventId, currEventOrigin]
| join type=left prevEventId
    [ search index="index_2"
    | rename eventId as prevEventId, eventOrigin as prevEventOrigin
    | fields prevEventId, prevEventOrigin]
| table index1Id, prevEventOrigin, currEventOrigin, prevEventId, currEventId

And based on online suggestions, I am trying the following approach, but couldn't complete it (it works fine at populating all the columns):

(index="index_1") OR (index="index_2")
| eval joiner=if(index="index_1", prevEventId, eventId)
| stats values(*) as * by joiner
| where prevEventId=eventId
| rename eventOrigin AS previousEventOrigin, eventId as previousEventId
| table index1Id, previousEventId, previousEventOrigin

Please let me know an efficient way to achieve this. Thanks
Hi all, I'd like to understand the minimum permissions needed to enable the log flow between GitHub and Splunk Cloud. Going by the documentation for the app, the account used to pull in the logs requires:
admin:enterprise - Full control of enterprises
manage_billing:enterprise - Read and write enterprise billing data
read:enterprise - Read enterprise profile data
Can we reduce the number of highly privileged permissions required for the integration?
I'm not sure it can be done reliably using props and transforms.  I'd use a scripted input to parse the data and re-format it.
And btw this one: How to split JSON array into Multiple events at Index Time? 
Small correction. If you don't define the cim_* macros, their contents will of course be empty. Ad-hoc or scheduled searches that don't use the accelerated summaries will indeed fall back to your user's role's default indexes, but the datamodel acceleration summary-building searches are spawned with the system user's default indexes, which is an empty list. You need an explicitly defined list of indexes to have CIM acceleration built properly.
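For example (the index names here are made up), you could set the cim_Network_Traffic_indexes macro definition to (index=firewall OR index=pan_logs) so that both your ad-hoc searches and the acceleration searches know which indexes to scan.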
Hi @PickleRick, Thanks for your feedback, though I'm surprised by the answer, as I've seen other clear indications and solutions for splitting JSON arrays into individual events, like: How to parse a JSON array delimited by "," into separate events with their unique timestamps?
This question is confusing. The data appears to be delimited by | yet the SPL uses ; as a delimiter. If the productName field starts after "productName=" and ends before the next | then this command should extract it:
| rex "productName=(?<productName>[^\|]+)"
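For example, with a made-up event like 2024-07-10 12:00:00|user=jdoe|productName=ACME Widget 5000|qty=3, that rex returns productName="ACME Widget 5000", stopping at the next pipe.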
TL;DR - you can't split events within Splunk itself during ingestion. Longer explanation - each event is processed as a single entity. You could try to make a copy of the event using CLONE_SOURCETYPE and then process each of those instances separately (for example - cut one part from one copy and another part from the other copy), but it's not something that can be reasonably implemented, it's unmaintainable in the long run, and you can't do it dynamically (like splitting a JSON into however many items an array has). Oh, and of course structured data manipulation at ingest time is a relatively big no-no. So your best bet would be to pre-process your data with a third-party tool (or at least write a scripted input doing the heavy lifting of splitting the data).
Ensure the named lookup and the associated lookup file are included in the search bundle.  Double-check the permissions of each.
Presuming the Cribl worker is compatible with the Cloud component and hides any incompatibility from the forwarder, then, yes.
Never edit files in default directories. Especially in system/default. Splunk merges settings from various files into an effective config according to these rules: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles Long story short - settings specified in a local directory overwrite settings specified in the default one. So you can add this setting to the system/local/web.conf file (or create the file if you don't already have it). Of course you need to specify the proper stanza if it isn't already there. The minimal file should look like this:
[settings]
tools.proxy.on = true
Or even better - create your own app with this setting: create a directory within the apps directory, create a local directory there, and put the web.conf file there.
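For example (the app name is just a placeholder), you could create $SPLUNK_HOME/etc/apps/my_web_settings/local/web.conf with the same [settings] stanza and tools.proxy.on = true line; after a restart the setting takes effect and, unlike anything edited under default, it survives upgrades.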
Hi All, I have this compressed (reduced) version of a large structure, which is a combination of basic text and JSON:

2024-07-10 07:27:28 +02:00 LiveEvent: {"data":{"time_span_seconds":300, "active":17519, "total":17519, "unique":4208, "total_prepared":16684, "unique_prepared":3703, "created":594, "updated":0, "deleted":0,"ports":[
{"stock_id":49, "goods_in":0, "picks":2, "inspection_or_adhoc":0, "waste_time":1, "wait_bin":214, "wait_user":66, "stock_open_seconds":281, "stock_closed_seconds":19, "bins_above":0, "completed":[43757746,43756193], "content_codes":[], "category_codes":[{"category_code":4,"count":2}]},
{"stock_id":46, "goods_in":0, "picks":1, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":2, "wait_user":298, "stock_open_seconds":300, "stock_closed_seconds":0, "bins_above":0, "completed":[43769715], "content_codes":[], "category_codes":[{"category_code":4,"count":1}]},
{"stock_id":1, "goods_in":0, "picks":3, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":191, "wait_user":40, "stock_open_seconds":231, "stock_closed_seconds":69, "bins_above":0, "completed":[43823628,43823659,43823660], "content_codes":[], "category_codes":[{"category_code":1,"count":3}]}
]}, "uuid":"8711336c-ddcd-432f-b388-8b3940ce151a", "session_id":"d14fbee3-0a7a-4026-9fbf-d90eb62d0e73", "session_sequence_number":5113, "version":"2.0.0", "installation_id":"a031v00001Bex7fAAB", "local_installation_timestamp":"2024-07-10T07:35:00.0000000+02:00", "date":"2024-07-10", "app_server_timestamp":"2024-07-10T07:27:28.8839856+02:00", "event_type":"STOCK_AND_PILE"}

I eventually need each "stock_id" to end up as an individual event, keeping the common information along with it: timestamp, uuid, session_id, session_sequence_number and event_type. Can someone guide me on how to use props and transforms to achieve this?

PS. I have read through several great posts on how to split JSON arrays into events, but none about how to keep the common fields in each of them.

Many thanks in advance. Best Regards, Bjarne
I downloaded the log file at `/opt/splunk/var/log/splunk/web_service.log` and opened it with Notepad++. When I search for 500 ERROR it shows too much data - could you please give me a more specific keyword? And when I search for macros it doesn't show anything. Sorry, I'm very confused about this.
Can you be a bit more specific? Which fields have "disappeared"? What does your SPL look like?