All Posts


Small correction. If you don't define the cim_* macros, their contents will of course be empty. While ad-hoc or scheduled searches that don't use the accelerated summaries will indeed fall back to your user's role's default indexes, the datamodel acceleration summary-building searches are spawned with the system user's default indexes, which is an empty list. You need an explicitly defined list of indexes for CIM acceleration to be built properly.
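For reference, a minimal sketch of what such a constraint can look like, assuming the Splunk Common Information Model add-on (Splunk_SA_CIM). These macros are normally populated via the CIM Setup page rather than edited by hand, and the index names below are placeholders:

# $SPLUNK_HOME/etc/apps/Splunk_SA_CIM/local/macros.conf
# Constrain the Authentication data model to explicit indexes
[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_secure)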
Hi @PickleRick, Thanks for your feedback, though I'm surprised by the answer, as I've seen other clear indications and solutions for splitting JSON arrays into individual events, like: How to parse a JSON array delimited by "," into separate events with their unique timestamps?
This question is confusing. The data appears to be delimited by | yet the SPL uses ; as a delimiter. If the productName field starts after "productName=" and ends before the next | then this command should extract it:
| rex "productName=(?<productName>[^\|]+)"
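A sketch of how the extraction and the split could then be combined, using the ; delimiter from the sample in the question (untested against the real data, so treat it as an assumption):

| rex "productName=(?<productName>[^\|]+)"
| makemv delim=";" productName
| mvexpand productName

The mvexpand is optional - drop it if you want the products to stay as a single multivalue field per event.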
TL;DR - you can't split events within Splunk itself during ingestion. Longer explanation - each event is processed as a single entity. You could try to make a copy of the event using CLONE_SOURCETYPE and then process each of those instances separately (for example, cut one part from one copy and another part from the other), but it's not something that can be reasonably implemented, it's unmaintainable in the long run, and you can't do it dynamically (like splitting a JSON array into however many items it has). And of course structured data manipulation at ingest time is a relatively big no-no. So your best bet would be to pre-process your data with a third-party tool (or at least write a scripted input doing the heavy lifting of splitting the data).
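For illustration only, a minimal sketch of the CLONE_SOURCETYPE mechanism mentioned above (the sourcetype names are made up, and this merely duplicates events into a second sourcetype - the per-copy trimming would still need its own transforms):

# props.conf
[my_original_sourcetype]
TRANSFORMS-clone = clone_to_copy

# transforms.conf
[clone_to_copy]
REGEX = .
CLONE_SOURCETYPE = my_cloned_sourcetype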
Ensure the named lookup and the associated lookup file are included in the search bundle.  Double-check the permissions of each.
Presuming the Cribl worker is compatible with the Cloud component and hides any incompatibility from the forwarder, then, yes.
Never edit files in default directories, especially in system/default. Splunk merges settings from various files into an effective config according to these rules: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles Long story short - settings specified in a local directory overwrite settings specified in a default one. So you can add this setting to the system/local/web.conf file (or create the file if you don't already have it). Of course you need to specify the proper stanza if it isn't there yet, so the minimal file should look like this:

[settings]
tools.proxy.on = true

Or even better - create your own app with this setting: create a directory within the apps directory, create a local directory there, and put the web.conf file there.
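A sketch of that app-based layout, with the app name being just an example:

$SPLUNK_HOME/etc/apps/my_proxy_settings/
    local/
        web.conf    <- the [settings] stanza with tools.proxy.on = true shown above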
Hi All, I have this compressed (reduced version of large structure) which is a combination of basic text and JSON:   2024-07-10 07:27:28 +02:00 LiveEvent: {"data":{"time_span_seconds":300, "active":17519, "total":17519, "unique":4208, "total_prepared":16684, "unique_prepared":3703, "created":594, "updated":0, "deleted":0,"ports":[ {"stock_id":49, "goods_in":0, "picks":2, "inspection_or_adhoc":0, "waste_time":1, "wait_bin":214, "wait_user":66, "stock_open_seconds":281, "stock_closed_seconds":19, "bins_above":0, "completed":[43757746,43756193], "content_codes":[], "category_codes":[{"category_code":4,"count":2}]}, {"stock_id":46, "goods_in":0, "picks":1, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":2, "wait_user":298, "stock_open_seconds":300, "stock_closed_seconds":0, "bins_above":0, "completed":[43769715], "content_codes":[], "category_codes":[{"category_code":4,"count":1}]}, {"stock_id":1, "goods_in":0, "picks":3, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":191, "wait_user":40, "stock_open_seconds":231, "stock_closed_seconds":69, "bins_above":0, "completed":[43823628,43823659,43823660], "content_codes":[], "category_codes":[{"category_code":1,"count":3}]} ]}, "uuid":"8711336c-ddcd-432f-b388-8b3940ce151a", "session_id":"d14fbee3-0a7a-4026-9fbf-d90eb62d0e73", "session_sequence_number":5113, "version":"2.0.0", "installation_id":"a031v00001Bex7fAAB", "local_installation_timestamp":"2024-07-10T07:35:00.0000000+02:00", "date":"2024-07-10", "app_server_timestamp":"2024-07-10T07:27:28.8839856+02:00", "event_type":"STOCK_AND_PILE"}   I eventually need each “stock_id” ending up as an individual event, and keep the common information along with it like: timestamp, uuid, session_id, session_sequence_number and event_type. Can someone guide me how to use props and transforms to achieve this? PS. I have read through several great posts on how to split JSON arrays into events, but none about how to keep common fields in each of them. Many thanks in advance. Best Regards, Bjarne
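Since ingest-time splitting is discouraged elsewhere in this thread, here is a hedged search-time sketch using spath and mvexpand instead of props/transforms (index, sourcetype and the final field list are placeholders):

index=your_index sourcetype=your_sourcetype "LiveEvent:"
| rex field=_raw "LiveEvent:\s+(?<json>\{.+\})$"
| spath input=json path=uuid output=uuid
| spath input=json path=session_id output=session_id
| spath input=json path=session_sequence_number output=session_sequence_number
| spath input=json path=event_type output=event_type
| spath input=json path=data.ports{} output=port
| mvexpand port
| spath input=port
| table _time uuid session_id session_sequence_number event_type stock_id picks wait_bin wait_user

After mvexpand, each row carries one ports entry plus the common fields extracted before the expansion.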
I downloaded the log file `/opt/splunk/var/log/splunk/web_service.log` and opened it with Notepad++, like this. When I search for "500 ERROR" it shows too much data - could you please give me a more specific keyword? Because when I search for "macros" it doesn't show anything. Sorry, I'm very confused about this.
Can you be a bit more specific? Which fields have "disappeared"? What does your SPL look like?
I just checked /system/default/web.conf and the config you mentioned before is there, but commented out. It says that I have to set it in local/web.conf if I run my Splunk behind a reverse proxy. Is that the correct location? To be safe, do I have to copy it to /local first, or can I just enable it in place?
I've written a Splunk query and run it, and it gives the results as expected, but as soon as I click on "Create Table View" some of the fields that were present after running the query disappear. Not sure what is wrong - could anyone help?
A JSON dashboard definition is for Dashboard Studio, not Classic. What is your question here (or does that already answer it)?
Thank you @yuanliu @jawahir007 Both of your solutions are working absolutely fine. @yuanliu yes, index A always has a larger number of hosts compared to index B. I would like to further expand this query to match the IP addresses as well. Can you provide some guidance around that?

Index A data:
Hostname | IP address | OS
xyz | 190.1.1.1, 101.2.2.2, 102.3.3.3, 4.3.2.1 | Windows
zbc | 100.0.1.0 | Linux
alb | 190.1.0.2 | Windows
cgf | 20.4.2.1 | Windows
bcn | 20.5.3.4, 30.4.6.1 | Solaris

Index B data:
Hostname | IP address
zbc | 30.4.6.1
alb | 101.2.2.2

Results:
Hostname | IP address | OS | match
xyz | 190.1.1.1, 101.2.2.2, 102.3.3.3, 4.3.2.1 | Windows | ok (because IP address 101.2.2.2 is matching)
zbc | 100.0.1.0 | Linux | ok
alb | 190.1.0.2 | Windows | ok
cgf | 20.4.2.1 | Windows | missing (neither the hostname is present nor the IP is matching)
bcn | 20.5.3.4, 30.4.6.1 | Solaris | yes (IP is matching)

In my initial use case, I compared the hostnames in index A with those in index B. Now, I want to check if the hosts in index A are reporting their IP addresses in index B. If there's a match, I will mark the corresponding hostname in index A as "ok".
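One possible sketch of the expanded comparison, assuming both indexes expose Hostname and IP_address fields and index A also has OS (the index and field names are placeholders to adapt to the real data):

(index=indexA) OR (index=indexB)
| eval ip=split(replace(IP_address, " ", ""), ",")
| mvexpand ip
| eval from_b=if(index="indexB", 1, 0)
| eventstats max(from_b) as ip_in_b by ip
| eventstats max(from_b) as host_in_b by Hostname
| where index="indexA"
| stats values(ip) as IP_address values(OS) as OS max(ip_in_b) as ip_in_b max(host_in_b) as host_in_b by Hostname
| eval match=if(ip_in_b=1 OR host_in_b=1, "ok", "missing")
| table Hostname IP_address OS match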
Hello, guys! I'm trying to use the episodes table as the base search in the Edit Dashboard view, as well in the Dashboard Classic using the source, but here we already have the results in the table. I'll attach my code snippet below:    { "dataSources": { "dsQueryCounterSearch1": { "options": { "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "mttrSearch": { "options": { "query": "| `itsi_event_management_get_mean_time(resolved)`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "episodesBySeveritySearch": { "options": { "query": "|`itsi_event_management_episode_by_severity`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "noiseReductionSearch": { "options": { "query": "| `itsi_event_management_noise_reduction`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "percentAckSearch": { "options": { "query": "| `itsi_event_management_get_episode_count(acknowledged)` | eval acknowledgedPercent=(Acknowledged/total)*100 | table acknowledgedPercent", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" }, "mttaSearch": { "options": { "query": "| `itsi_event_management_get_mean_time(acknowledged)`", "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" } }, "type": "ds.search" } }, "visualizations": { "vizQueryCounterSearch1": { "title": "Query Counter 1", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0 }, "dataSources": { "primary": "dsQueryCounterSearch1" } }, "episodesBySeverity": { "title": "Episodes by Severity", "type": "splunk.bar", "options": { "backgroundColor": "#ffffff", "barSpacing": 5, "dataValuesDisplay": "all", "legendDisplay": "off", "showYMajorGridLines": false, "yAxisLabelVisibility": "hide", "xAxisMajorTickVisibility": "hide", "yAxisMajorTickVisibility": "hide", "xAxisTitleVisibility": "hide", "yAxisTitleVisibility": "hide" }, "dataSources": { "primary": "episodesBySeveritySearch" } }, "noiseReduction": { "title": "Total Noise Reduction", "type": "splunk.singlevalue", "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorThresholds)", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "context": { "backgroundColorThresholds": [ { "from": 95, "value": "#65a637" }, { "from": 90, "to": 95, "value": "#6db7c6" }, { "from": 87, "to": 90, "value": "#f7bc38" }, { "from": 85, "to": 87, "value": "#f58f39" }, { "to": 85, "value": "#d93f3c" } ] }, "dataSources": { "primary": "noiseReductionSearch" } }, "percentAck": { "title": "Episodes Acknowledged", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "numberPrecision": 2, "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "%" }, "dataSources": { "primary": "percentAckSearch" } }, "mtta": { "title": "Mean Time to Acknowledged", "type": "splunk.singlevalue", "options": { "backgroundColor": "#ffffff", "sparklineDisplay": "off", "trendDisplay": "off", "trendValue": 0, "unit": "minutes" }, "dataSources": { "primary": "mttaSearch" } } }, "layout": { "type": "grid", "options": { "display": "auto-scale", "height": 240, "width": 1440 
}, "structure": [ { "item": "vizQueryCounterSearch1", "type": "block", "position": { "x": 0, "y": 80, "w": 288, "h": 220 } }, { "item": "episodesBySeverity", "type": "block", "position": { "x": 288, "y": 80, "w": 288, "h": 220 } }, { "item": "noiseReduction", "type": "block", "position": { "x": 576, "y": 80, "w": 288, "h": 220 } }, { "item": "percentAck", "type": "block", "position": { "x": 864, "y": 80, "w": 288, "h": 220 } }, { "item": "mtta", "type": "block", "position": { "x": 1152, "y": 80, "w": 288, "h": 220 } } ] } }       I really appreciate your help, have a great day
Hello all, can you help me with this? I get data like this: abc=1|productName= SHAMPTS JODAC RL MTV 36X(4X60G);ABC MANIS RL 12X720G;SO KLIN ROSE FRESH LIQ 24X200ML|field23=tip I want to extract productName but can't, because the productName value is not wrapped in " ", so I'm confused about how to extract it. I've tried the SPL command | makemv delim=";" productName but the only result is SHAMPTS JODAC RL MTV 36X(4X60G); the rest doesn't appear. I've also tried a regex with the command | makemv tokenizer="(([[:alnum:]]+ )+([[:word:]]+))" productName but the result is still the same. So is there any suggestion so that the values after ; can be extracted too?
Solution: upgrading (and therefore reinstalling ES) to ES 7.3.2 again solved the issue.
Hello, in a clustered or standalone environment, after upgrading first Splunk core and then Splunk ES, Incident Review no longer works and doesn't show any notables. The `notable` macro is in error and we can see SA-Utils Python errors in the log files.
Think layers. HTTP vs. HTTPS is decided before any HTTP request is even sent, so it's enabled at the level of the whole network port, and all HEC tokens are serviced by either an HTTP or an HTTPS input. Whether the HTTP/HTTPS question is important for you security-wise depends on your approach to the data you're ingesting - is it highly confidential, and is anyone eavesdropping on it on the wire a great concern to you, or not? While Splunk states that switching from HTTPS to HTTP can give a significant performance boost, I'd be cautious with such general statements. It does depend on the hardware you're using and the volume of data you're processing. If you have a fairly modern server or a properly specced VM and you're not processing humongous amounts of data, you should be fairly OK with HTTPS enabled.
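To illustrate the port-level nature of the setting, a sketch of a HEC input stanza (stanza name, port and token are placeholders; the file would typically live in an app's local inputs.conf on the instance hosting HEC):

# inputs.conf
[http]
disabled = 0
port = 8088
enableSSL = 1    # applies to the whole HEC port, i.e. to every token

[http://my_app_token]
token = 00000000-0000-0000-0000-000000000000
disabled = 0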
1. Enable audit logging on your database system. It's different in each RDBMS, so you have to work with your DB admin on that.
2. Collect the log - as far as I remember, MSSQL stores audit logs in a separate database, so you have to use DB Connect to pull those entries from the database. MySQL, I think, simply writes its audit trail to a flat text log file, so you'll have to set up a file monitor input.
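For the flat-file case, a minimal sketch of a monitor input (the path, sourcetype and index are assumptions - adjust them to wherever your MySQL audit plugin actually writes):

# inputs.conf on the forwarder
[monitor:///var/log/mysql/audit.log]
sourcetype = mysql:audit
index = db_audit
disabled = 0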