All Posts


Thank you for the response!!
That one relies on the fact that it was a simple array and could be cut into pieces with regexes. The splitting mechanism would break if the data changed - for example, if another field besides the "local" one were added to the "outer" JSON.
I have 2 indexes - index_1 and index_2.

index_1 has the following fields: index1Id, currEventId, prevEventId
index_2 has the following fields: index2Id, eventId, eventOrigin

currEventId and prevEventId in index_1 will have the same values as eventId in index_2. Now I am trying to create a table of the following format: index1Id, prevEventId, prevEventOrigin, currEventId, currEventOrigin.

I tried joins with the query below, but I see that columns 3 and 5 are mostly blank, so I am not sure what is wrong with the query.

    index="index_1"
    | join type=left currEventId
        [ search index="index_2"
        | rename eventId as currEventId, eventOrigin as currEventOrigin
        | fields currEventId, currEventOrigin]
    | join type=left prevEventId
        [ search index="index_2"
        | rename eventId as prevEventId, eventOrigin as prevEventOrigin
        | fields prevEventId, prevEventOrigin]
    | table index1Id, prevEventOrigin, currEventOrigin, prevEventId, currEventId

Based on online suggestions, I am also trying the following approach, but couldn't complete it (it does populate all the columns):

    (index="index_1") OR (index="index_2")
    | eval joiner=if(index="index_1", prevEventId, eventId)
    | stats values(*) as * by joiner
    | where prevEventId=eventId
    | rename eventOrigin AS previousEventOrigin, eventId as previousEventId
    | table index1Id, previousEventId, previousEventOrigin

Please let me know an efficient way to achieve this. Thanks!
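For what it's worth, the lookup logic this question is after can be sketched outside SPL. This is a hedged Python illustration with made-up rows: instead of two separate joins against index_2, build one eventId-to-origin map and resolve both the prev and curr columns from it.

```python
# Hypothetical rows standing in for events from the two indexes.
index_1 = [
    {"index1Id": "A", "prevEventId": "e1", "currEventId": "e2"},
    {"index1Id": "B", "prevEventId": "e2", "currEventId": "e3"},
]
index_2 = [
    {"eventId": "e1", "eventOrigin": "web"},
    {"eventId": "e2", "eventOrigin": "api"},
    {"eventId": "e3", "eventOrigin": "batch"},
]

# One eventId -> eventOrigin map replaces both join subsearches.
origin = {row["eventId"]: row["eventOrigin"] for row in index_2}

result = [
    {
        "index1Id": row["index1Id"],
        "prevEventId": row["prevEventId"],
        "prevEventOrigin": origin.get(row["prevEventId"]),
        "currEventId": row["currEventId"],
        "currEventOrigin": origin.get(row["currEventId"]),
    }
    for row in index_1
]
```

In SPL terms this is roughly what a single lookup (or one stats pass over both indexes) would do, which is why it avoids the blank columns that stacked joins can produce.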
Hi all, I would like to understand the minimum permissions needed to enable the log flow between GitHub and Splunk Cloud. Going by the documentation for the app, the account used to pull in the logs requires:

- admin:enterprise - Full control of enterprises
- manage_billing:enterprise - Read and write enterprise billing data
- read:enterprise - Read enterprise profile data

Can we reduce the amount of high-privilege permissions required for the integration?
I'm not sure it can be done reliably using props and transforms.  I'd use a scripted input to parse the data and re-format it.
And btw this one: How to split JSON array into Multiple events at Index Time? 
Small correction. If you don't define the cim_* macros, their contents will of course be empty. While ad-hoc or scheduled searches that don't use the accelerated summaries will indeed fall back to your user's role's default indexes, the datamodel acceleration summary-building searches are spawned with the system user's default indexes, which is an empty list. You need an explicitly defined list of indexes for CIM acceleration to be built properly.
Hi @PickleRick, Thanks for your feedback, though I'm surprised by the answer, as I've seen other clear indications and solutions for splitting JSON arrays into individual events, like: How to parse a JSON array delimited by "," into separate events with their unique timestamps?
This question is confusing. The data appears to be delimited by | yet the SPL uses ; as a delimiter. If the productName field starts after "productName=" and ends before the next |, then this command should extract it:

    | rex "productName=(?<productName>[^\|]+)"
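Outside Splunk, the same capture can be sanity-checked with an equivalent Python regex (the sample value below is made up for illustration):

```python
import re

# Same idea as the rex above: capture everything after "productName="
# up to (but not including) the next "|" delimiter.
pattern = re.compile(r"productName=(?P<productName>[^|]+)")

sample = "id=123|productName=Widget Pro 9000|price=19.99"
match = pattern.search(sample)
print(match.group("productName"))  # -> Widget Pro 9000
```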
TL;DR - you can't split events within Splunk itself during ingestion.

Longer explanation - each event is processed as a single entity. You could try to make a copy of the event using CLONE_SOURCETYPE and then process each of those instances separately (for example, cut one part from one copy but another part from another copy), but it's not something that can be reasonably implemented, it's unmaintainable in the long run, and you can't do it dynamically (like splitting a JSON into however many items an array has). Oh, and of course structured data manipulation at ingest time is a relatively big no-no.

So your best bet would be to pre-process your data with a third-party tool (or at least write a scripted input doing the heavy lifting of splitting the data).
Ensure the named lookup and the associated lookup file are included in the search bundle.  Double-check the permissions of each.
Presuming the Cribl worker is compatible with the Cloud component and hides any incompatibility from the forwarder, then, yes.
Never edit files in default directories, especially in system/default. Splunk merges settings from various files into an effective config according to these rules: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles

Long story short - settings specified in a local directory overwrite settings specified in the default one. So you can add this setting to the system/local/web.conf file (or create the file if you don't already have it). Of course you need to specify the proper stanza if it isn't there yet, so the minimal file should look like this:

    [settings]
    tools.proxy.on = true

Or even better - create your own app with this setting: create a directory within the apps directory, create a local directory there, and put the web.conf file there.
Hi All, I have this compressed (reduced version of a large structure) which is a combination of basic text and JSON:

2024-07-10 07:27:28 +02:00 LiveEvent: {"data":{"time_span_seconds":300, "active":17519, "total":17519, "unique":4208, "total_prepared":16684, "unique_prepared":3703, "created":594, "updated":0, "deleted":0,"ports":[ {"stock_id":49, "goods_in":0, "picks":2, "inspection_or_adhoc":0, "waste_time":1, "wait_bin":214, "wait_user":66, "stock_open_seconds":281, "stock_closed_seconds":19, "bins_above":0, "completed":[43757746,43756193], "content_codes":[], "category_codes":[{"category_code":4,"count":2}]}, {"stock_id":46, "goods_in":0, "picks":1, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":2, "wait_user":298, "stock_open_seconds":300, "stock_closed_seconds":0, "bins_above":0, "completed":[43769715], "content_codes":[], "category_codes":[{"category_code":4,"count":1}]}, {"stock_id":1, "goods_in":0, "picks":3, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":191, "wait_user":40, "stock_open_seconds":231, "stock_closed_seconds":69, "bins_above":0, "completed":[43823628,43823659,43823660], "content_codes":[], "category_codes":[{"category_code":1,"count":3}]} ]}, "uuid":"8711336c-ddcd-432f-b388-8b3940ce151a", "session_id":"d14fbee3-0a7a-4026-9fbf-d90eb62d0e73", "session_sequence_number":5113, "version":"2.0.0", "installation_id":"a031v00001Bex7fAAB", "local_installation_timestamp":"2024-07-10T07:35:00.0000000+02:00", "date":"2024-07-10", "app_server_timestamp":"2024-07-10T07:27:28.8839856+02:00", "event_type":"STOCK_AND_PILE"}

I eventually need each "stock_id" to end up as an individual event, keeping the common information along with it: timestamp, uuid, session_id, session_sequence_number and event_type. Can someone guide me on how to use props and transforms to achieve this?

PS. I have read through several great posts on how to split JSON arrays into events, but none about how to keep common fields in each of them. Many thanks in advance.
Best Regards, Bjarne
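Since the replies above point toward pre-processing rather than props/transforms, here is a hedged Python sketch (not a drop-in Splunk config) of a script that splits the "ports" array and copies the requested common fields onto each element; the function name and the reduced sample are my own:

```python
import json

# Common top-level fields the question wants carried into every event.
COMMON = ["uuid", "session_id", "session_sequence_number", "event_type"]

def split_live_event(line: str) -> list:
    """Split one 'timestamp LiveEvent: {json}' line into one
    single-line JSON event per entry of data.ports, keeping
    the common fields and the leading timestamp on each."""
    timestamp, payload = line.split("LiveEvent:", 1)
    doc = json.loads(payload)
    common = {k: doc[k] for k in COMMON if k in doc}
    common["timestamp"] = timestamp.strip()
    # Merge the common fields into every port entry.
    return [json.dumps({**common, **port}) for port in doc["data"]["ports"]]
```

Each returned line is a self-contained JSON event (one per stock_id) that a scripted input or external pre-processor could hand to Splunk.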
I downloaded the log file at `/opt/splunk/var/log/splunk/web_service.log` and opened it with Notepad++. When I search for "500 ERROR" it shows too much data - could you please give me a more specific keyword? And when I search for "macros" it doesn't show anything. Sorry, I'm very confused about this.
Can you be a bit more specific? Which fields have "disappeared"? What does your SPL look like?
I just checked /system/default/web.conf - the config you mentioned before is there but commented out. It says I have to set it in local/web.conf if I run Splunk behind a reverse proxy. Is that the correct location? To be safe, do I have to copy it to /local first, or can I just enable it in place?
I've written a Splunk query and run it, and it gives the expected result, but as soon as I click "Create Table View" some of the fields disappear that were present after the query run. Not sure what is wrong - could anyone help?
JSON dashboard definition is for Studio not Classic. What is your question here (or does that already answer it!)?
Thank you @yuanliu @jawahir007. Both of your solutions are working absolutely fine.

@yuanliu yes, index A always has a larger number of hosts compared to index B. I would like to further expand this query to match the IP addresses as well. Can you provide some guidance around that?

Index A data:

    Hostname   IP address                                   OS
    xyz        190.1.1.1, 101.2.2.2, 102.3.3.3, 4.3.2.1     Windows
    zbc        100.0.1.0                                    Linux
    alb        190.1.0.2                                    Windows
    cgf        20.4.2.1                                     Windows
    bcn        20.5.3.4, 30.4.6.1                           Solaris

Index B data:

    Hostname   IP address
    zbc        30.4.6.1
    alb        101.2.2.2

Results:

    Hostname   IP address                                   OS        match
    xyz        190.1.1.1, 101.2.2.2, 102.3.3.3, 4.3.2.1     Windows   ok (because IP address 101.2.2.2 is matching)
    zbc        100.0.1.0                                    Linux     ok
    alb        190.1.0.2                                    Windows   ok
    cgf        20.4.2.1                                     Windows   missing (neither the hostname is present nor an IP is matching)
    bcn        20.5.3.4, 30.4.6.1                           Solaris   yes (IP is matching)

In my initial use case, I compared the hostnames in index A with those in index B. Now I also want to check whether the hosts in index A are reporting their IP addresses in index B. If there's a match, I will mark the corresponding hostname in index A as "ok".
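The matching rule described in that question (hostname present in index B, or any of the host's IPs overlapping index B's IPs) can be sketched in plain Python, independent of SPL; the rows below are a subset of the sample data:

```python
# A few index A rows; IPs are kept as sets so overlap is a set intersection.
index_a = [
    {"host": "xyz", "ips": {"190.1.1.1", "101.2.2.2", "102.3.3.3", "4.3.2.1"}},
    {"host": "cgf", "ips": {"20.4.2.1"}},
]

# Index B reduced to a hostname set and a flat set of reported IPs.
b_hosts = {"zbc", "alb"}
b_ips = {"30.4.6.1", "101.2.2.2"}

for row in index_a:
    # "ok" if the hostname is known to B, or any IP overlaps B's IPs.
    matched = row["host"] in b_hosts or bool(row["ips"] & b_ips)
    row["match"] = "ok" if matched else "missing"
```

In SPL the same idea would typically be expressed by making the IP field multivalue (e.g. with makemv/mvexpand) before comparing against index B, but the set logic above is the behavior being asked for.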