Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, we are using DB Connect to collect logs from Oracle databases. We use a rising-mode input, which requires the database statement to be written so that the column used for checkpointing is compared against a "?"; Splunk DB Connect fills in the "?" with the checkpoint value. Occasionally an input fails with "ORA-01843: not a valid month". The error itself is understood.* The question is: how do we rewrite the query to avoid this, given that Splunk/DB Connect controls how the "?" in the query is replaced? Here is an example query:

SELECT ACTION_NAME, CAST((EVENT_TIMESTAMP AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est
FROM AUDSYS.UNIFIED_AUDIT_TRAIL
WHERE EVENT_TIMESTAMP > ?
ORDER BY EVENT_TIMESTAMP ASC;

How can we format the timestamp bound to the "?" in a way that the database understands while still meeting the DB Connect rising-input requirement? Thank you!

*(Our understanding is that the error means the timestamp format in the query is not understood by the database. The fact that it happens only occasionally suggests there is some offending row within the result set.)
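One possible rewrite, sketched under the assumption that the checkpoint value DB Connect binds is a string like 2024-07-10 07:27:28.883985 (the exact format mask below is a guess and must be matched to the checkpoint string stored for your input): parse the bound value explicitly with TO_TIMESTAMP so the conversion no longer depends on the session's NLS settings.

SELECT ACTION_NAME,
       CAST((EVENT_TIMESTAMP AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est
FROM   AUDSYS.UNIFIED_AUDIT_TRAIL
-- TO_TIMESTAMP makes the string-to-timestamp conversion explicit;
-- adjust the format mask to match the checkpoint value DB Connect actually stores.
WHERE  EVENT_TIMESTAMP > TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS.FF6')
ORDER  BY EVENT_TIMESTAMP ASC;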
I have 2 indexes, index_1 and index_2.

index_1 has the fields: index1Id, currEventId, prevEventId
index_2 has the fields: index2Id, eventId, eventOrigin

currEventId and prevEventId in index_1 will have the same values as eventId in index_2. Now I am trying to create a table of the following format:

index1Id prevEventId prevEventOrigin currEventId currEventOrigin

I tried joins with the query below, but columns 3 and 5 are mostly blank, so I am not sure what is wrong with the query.

index="index_1"
| join type=left currEventId [ search index="index_2" | rename eventId as currEventId, eventOrigin as currEventOrigin | fields currEventId, currEventOrigin]
| join type=left prevEventId [ search index="index_2" | rename eventId as prevEventId, eventOrigin as prevEventOrigin | fields prevEventId, prevEventOrigin]
| table index1Id, prevEventOrigin, currEventOrigin, prevEventId, currEventId

Based on online suggestions, I am also trying the following approach, but I couldn't complete it (it populates all of its columns fine, but only covers the previous event side):

(index="index_1") OR (index="index_2")
| eval joiner=if(index="index_1", prevEventId, eventId)
| stats values(*) as * by joiner
| where prevEventId=eventId
| rename eventOrigin AS previousEventOrigin, eventId as previousEventId
| table index1Id, previousEventId, previousEventOrigin

Please let me know an efficient way to achieve this. Thanks!
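A minimal join-free sketch, assuming the eventId, currEventId, and prevEventId values really are identical strings (case or whitespace differences, and subsearch row limits, are common reasons join columns come back blank): enrich the index_1 events twice with eventstats, once per key.

(index="index_1") OR (index="index_2")
| eval key=if(index="index_2", eventId, currEventId)
| eventstats values(eventOrigin) as currEventOrigin by key
| eval key=if(index="index_2", eventId, prevEventId)
| eventstats values(eventOrigin) as prevEventOrigin by key
| search index="index_1"
| table index1Id, prevEventId, prevEventOrigin, currEventId, currEventOrigin

Because only index_2 events carry eventOrigin, each eventstats pass copies the origin onto every index_1 event sharing the key, and no subsearch limits apply.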
Hi all, I want to understand the minimum permissions needed to enable log flow between GitHub and Splunk Cloud. Going by the documentation for the app, the account used to pull in the logs requires:

admin:enterprise (full control of enterprises)
manage_billing:enterprise (read and write enterprise billing data)
read:enterprise (read enterprise profile data)

Can we reduce the number of highly privileged permissions required for the integration?
Hi All, I have this compressed event (a reduced version of a large structure), which is a combination of plain text and JSON:

2024-07-10 07:27:28 +02:00 LiveEvent: {"data":{"time_span_seconds":300, "active":17519, "total":17519, "unique":4208, "total_prepared":16684, "unique_prepared":3703, "created":594, "updated":0, "deleted":0,"ports":[
{"stock_id":49, "goods_in":0, "picks":2, "inspection_or_adhoc":0, "waste_time":1, "wait_bin":214, "wait_user":66, "stock_open_seconds":281, "stock_closed_seconds":19, "bins_above":0, "completed":[43757746,43756193], "content_codes":[], "category_codes":[{"category_code":4,"count":2}]},
{"stock_id":46, "goods_in":0, "picks":1, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":2, "wait_user":298, "stock_open_seconds":300, "stock_closed_seconds":0, "bins_above":0, "completed":[43769715], "content_codes":[], "category_codes":[{"category_code":4,"count":1}]},
{"stock_id":1, "goods_in":0, "picks":3, "inspection_or_adhoc":0, "waste_time":0, "wait_bin":191, "wait_user":40, "stock_open_seconds":231, "stock_closed_seconds":69, "bins_above":0, "completed":[43823628,43823659,43823660], "content_codes":[], "category_codes":[{"category_code":1,"count":3}]}
]}, "uuid":"8711336c-ddcd-432f-b388-8b3940ce151a", "session_id":"d14fbee3-0a7a-4026-9fbf-d90eb62d0e73", "session_sequence_number":5113, "version":"2.0.0", "installation_id":"a031v00001Bex7fAAB", "local_installation_timestamp":"2024-07-10T07:35:00.0000000+02:00", "date":"2024-07-10", "app_server_timestamp":"2024-07-10T07:27:28.8839856+02:00", "event_type":"STOCK_AND_PILE"}

I eventually need each "stock_id" object to end up as an individual event, keeping the common information with it: timestamp, uuid, session_id, session_sequence_number and event_type. Can someone guide me on how to use props and transforms to achieve this?

PS. I have read through several great posts on how to split JSON arrays into events, but none about how to keep the common fields in each of them. Many thanks in advance. Best Regards, Bjarne
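Worth noting that props/transforms alone cannot clone one incoming event into several, so the usual alternative is to split at search time (or pre-split the payload before ingestion). A minimal search-time sketch, assuming the raw layout matches the sample above:

| rex field=_raw "LiveEvent:\s*(?<json>\{.+\})"
| spath input=json output=ports path=data.ports{}
| spath input=json output=uuid path=uuid
| spath input=json output=session_id path=session_id
| spath input=json output=session_sequence_number path=session_sequence_number
| spath input=json output=event_type path=event_type
| mvexpand ports
| spath input=ports
| table _time uuid session_id session_sequence_number event_type stock_id picks wait_bin wait_user

mvexpand duplicates the already-extracted common fields onto every ports element, which handles the "keep common fields" part; the final spath then flattens each element's own keys such as stock_id.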
I've written a Splunk query and run it, and it gives the expected results, but as soon as I click "Create Table View" some of the fields disappear that were present after the query ran. Not sure what is wrong; could anyone help?
Hello, guys! I'm trying to use the episodes table as the base search in the Edit Dashboard view, as well as in the classic dashboard using the source editor, where we already have the results in the table. I'll attach my code snippet below:

{
  "dataSources": {
    "dsQueryCounterSearch1": {
      "options": {
        "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search"
    },
    "mttrSearch": {
      "options": {
        "query": "| `itsi_event_management_get_mean_time(resolved)`",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search"
    },
    "episodesBySeveritySearch": {
      "options": {
        "query": "|`itsi_event_management_episode_by_severity`",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search"
    },
    "noiseReductionSearch": {
      "options": {
        "query": "| `itsi_event_management_noise_reduction`",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search"
    },
    "percentAckSearch": {
      "options": {
        "query": "| `itsi_event_management_get_episode_count(acknowledged)` | eval acknowledgedPercent=(Acknowledged/total)*100 | table acknowledgedPercent",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search"
    },
    "mttaSearch": {
      "options": {
        "query": "| `itsi_event_management_get_mean_time(acknowledged)`",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      },
      "type": "ds.search"
    }
  },
  "visualizations": {
    "vizQueryCounterSearch1": {
      "title": "Query Counter 1",
      "type": "splunk.singlevalue",
      "options": {
        "backgroundColor": "#ffffff",
        "sparklineDisplay": "off",
        "trendDisplay": "off",
        "trendValue": 0
      },
      "dataSources": { "primary": "dsQueryCounterSearch1" }
    },
    "episodesBySeverity": {
      "title": "Episodes by Severity",
      "type": "splunk.bar",
      "options": {
        "backgroundColor": "#ffffff",
        "barSpacing": 5,
        "dataValuesDisplay": "all",
        "legendDisplay": "off",
        "showYMajorGridLines": false,
        "yAxisLabelVisibility": "hide",
        "xAxisMajorTickVisibility": "hide",
        "yAxisMajorTickVisibility": "hide",
        "xAxisTitleVisibility": "hide",
        "yAxisTitleVisibility": "hide"
      },
      "dataSources": { "primary": "episodesBySeveritySearch" }
    },
    "noiseReduction": {
      "title": "Total Noise Reduction",
      "type": "splunk.singlevalue",
      "options": {
        "backgroundColor": "> majorValue | rangeValue(backgroundColorThresholds)",
        "numberPrecision": 2,
        "sparklineDisplay": "off",
        "trendDisplay": "off",
        "trendValue": 0,
        "unit": "%"
      },
      "context": {
        "backgroundColorThresholds": [
          { "from": 95, "value": "#65a637" },
          { "from": 90, "to": 95, "value": "#6db7c6" },
          { "from": 87, "to": 90, "value": "#f7bc38" },
          { "from": 85, "to": 87, "value": "#f58f39" },
          { "to": 85, "value": "#d93f3c" }
        ]
      },
      "dataSources": { "primary": "noiseReductionSearch" }
    },
    "percentAck": {
      "title": "Episodes Acknowledged",
      "type": "splunk.singlevalue",
      "options": {
        "backgroundColor": "#ffffff",
        "numberPrecision": 2,
        "sparklineDisplay": "off",
        "trendDisplay": "off",
        "trendValue": 0,
        "unit": "%"
      },
      "dataSources": { "primary": "percentAckSearch" }
    },
    "mtta": {
      "title": "Mean Time to Acknowledged",
      "type": "splunk.singlevalue",
      "options": {
        "backgroundColor": "#ffffff",
        "sparklineDisplay": "off",
        "trendDisplay": "off",
        "trendValue": 0,
        "unit": "minutes"
      },
      "dataSources": { "primary": "mttaSearch" }
    }
  },
  "layout": {
    "type": "grid",
    "options": {
      "display": "auto-scale",
      "height": 240,
      "width": 1440
    },
    "structure": [
      { "item": "vizQueryCounterSearch1", "type": "block", "position": { "x": 0, "y": 80, "w": 288, "h": 220 } },
      { "item": "episodesBySeverity", "type": "block", "position": { "x": 288, "y": 80, "w": 288, "h": 220 } },
      { "item": "noiseReduction", "type": "block", "position": { "x": 576, "y": 80, "w": 288, "h": 220 } },
      { "item": "percentAck", "type": "block", "position": { "x": 864, "y": 80, "w": 288, "h": 220 } },
      { "item": "mtta", "type": "block", "position": { "x": 1152, "y": 80, "w": 288, "h": 220 } }
    ]
  }
}

I really appreciate your help, have a great day.
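In Dashboard Studio, one way to express this is a chained data source: one ds.search holds the episode base search, and each panel extends it with ds.chain. A minimal sketch; baseEpisodeSearch and its query string are placeholders for whatever search actually produces your episodes table:

{
  "dataSources": {
    "baseEpisodeSearch": {
      "type": "ds.search",
      "options": {
        "query": "<your episode base search here>",
        "queryParameters": { "earliest": "$earliest_time$", "latest": "$latest_time$" }
      }
    },
    "dsQueryCounterSearch1": {
      "type": "ds.chain",
      "options": {
        "extend": "baseEpisodeSearch",
        "query": "| where AlertSource = \"AWS\" AND AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS"
      }
    }
  }
}

The chain runs the base search once and applies each panel's pipeline on top, which is the Dashboard Studio equivalent of a classic base search with post-process searches.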
Hello all, can you help me with this? I get data like this:

abc=1|productName= SHAMPTS JODAC RL MTV 36X(4X60G);ABC MANIS RL 12X720G;SO KLIN ROSE FRESH LIQ 24X200ML|field23=tip

I want to extract productName, but I can't, because the productName value is not wrapped in " ", so I'm confused about how to extract it. I've tried the SPL command | makemv delim=";" productName, but the only result is SHAMPTS JODAC RL MTV 36X(4X60G); the rest doesn't appear. I also tried regex with | makemv tokenizer="(([[:alnum:]]+ )+([[:word:]]+))" productName, but the result is still the same. Is there any suggestion so that the values after ";" can be extracted too?
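A minimal sketch, assuming the raw event is pipe-delimited exactly as in the sample: re-extract productName up to the next pipe with rex, then split on semicolons. If only the first item survives makemv, the automatic extraction probably stopped at the first token rather than at the | delimiter.

| rex field=_raw "productName=\s*(?<productName>[^|]+)"
| makemv delim=";" productName
| mvexpand productName

The [^|]+ pattern captures everything up to the next | separator, so the whole semicolon-separated list stays intact for makemv; mvexpand is optional if you want one row per product.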
Hello, in both clustered and standalone environments, after upgrading Splunk core first and then Splunk ES, Incident Review no longer works and does not show any notables. The `notable` macro is in error, and we can see SA-Utils Python errors in the log files.
Hi Splunk Experts, I have configured HEC and tried to send log data via the OTel Collector, but I can't find a service for the collector. Kindly suggest how to enable the collector service to receive data from the OTel Collector. Much appreciated for your inputs. Regards, Eshwar
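For reference, a minimal collector configuration that forwards logs to a Splunk HEC endpoint; this is a sketch, and the token, endpoint host, and index below are placeholders to replace with your own values (the splunk_hec exporter ships with the OpenTelemetry Collector Contrib distribution and with the Splunk distribution of the collector):

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  splunk_hec:
    # Placeholder HEC token and endpoint; substitute your own.
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://your-splunk-host:8088/services/collector"
    index: "main"
    sourcetype: "otel"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]

There is no separate collector service to enable on the Splunk side beyond the HEC input itself; the collector is its own process or system service, started against this file, e.g. otelcol --config=config.yaml.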
Hello All, I am looking for a query that can provide me with a list of sourcetypes that have not been searched. Kindly suggest.
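A minimal sketch of one way to approximate this, assuming "not searched" means "not mentioned literally in any search string recorded in the _audit index over the chosen time range"; sourcetypes referenced indirectly through macros, tags, or eventtypes will not be caught:

| metadata type=sourcetypes index=*
| fields sourcetype
| search NOT
    [ search index=_audit action=search search=*
      | rex field=search "sourcetype\s*=\s*\"?(?<sourcetype>[^\s\"]+)"
      | dedup sourcetype
      | fields sourcetype ]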
For example, I have a link to a specific trace: https://xxxx.signalfx.com/#/apm/traces/2459682daf1fe95db9bbff2042a1ec0e This shows me the whole trace waterfall from the beginning of the trace. Now I want to be able to open this trace at a specific start_time and view it up to end_time. Is that possible? If yes, what would the correct link be?
How to fix "Could not load lookup=LOOKUP-autolookup_prices"?
I have this query:

index=x host=y "searchTerm" | stats Avg(Field1) Avg(Field2)

which returns N statistics rows. I would like to modify my query so that the first stats value (statistics[0]), the middle stats value ((statistics[0]+statistics[N])/length(statistics)), and the final stats value (statistics[N]) are returned in the same query. I have tried using head and tail, but that still limits the output to the specified value after 'head' or 'tail'. What other options are available?
What would cause a command line query ( bin/splunk search "..." ) to return duplicate results over what the UI would return?
Hello everyone, I'd like to start out by saying I'm really quite new to Splunk, and we run older versions (6.6.3 and 7.2.3). I'm looking for a search that will do the following:

- Look up the current hosts in our system, which I can get with the following search:

index=* "daily.cvd"
| dedup host
| table host

- Then compare to a CSV file that has one column, with A1 being "host" and all other entries being the hosts that SHOULD be present/accounted for. Using ChatGPT I was able to get something like the following, which on its own will properly read the CSV file and output the hosts in it:

| append [ | inputlookup hosts.csv | rename host as known_hosts | stats values(known_hosts) as known_hosts ]
| eval source="current"
| eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing")
| eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status)
| mvexpand current_hosts
| mvexpand known_hosts
| table current_hosts, known_hosts, status

- However, when I combine the two, it shows me 118 results (it should only be 59), there are no results in the "current_hosts" column, and after 59 blank results the "known_hosts" column shows the correct results from the CSV:

index=* "daily.cvd"
| dedup host
| table host
| append [ | inputlookup hosts.csv | rename host as known_hosts | stats values(known_hosts) as known_hosts ]
| eval source="current"
| eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing")
| eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status)
| mvexpand current_hosts
| mvexpand known_hosts
| table current_hosts, known_hosts, status

I'd love any help on this; I wouldn't be surprised if ChatGPT is making things more difficult than needed. Thanks in advance!
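A minimal sketch of a simpler pattern that avoids mvfind and mvexpand entirely, assuming hosts.csv has a single host column as described (note the combined search above never creates a current_hosts field at all, which is why that column comes back blank): tag where each host was seen, then let stats line them up by host.

index=* "daily.cvd"
| stats count by host
| eval in_splunk="yes"
| fields host in_splunk
| append [ | inputlookup hosts.csv | eval in_csv="yes" | fields host in_csv ]
| stats values(in_splunk) as in_splunk, values(in_csv) as in_csv by host
| eval status=case(in_splunk="yes" AND in_csv="yes", "Existing", in_csv="yes", "Missing", true(), "New")
| table host status

Hosts seen only in the CSV come out "Missing", hosts seen only in the data come out "New", and the row count equals the number of distinct hosts rather than the sum of both lists.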
Hello, I'm new to the Splunk Synthetics platform and am looking for guidance on how the alert conditions below work.

Test 1: scheduled to run every 1 minute. Does this mean an alert email is triggered when the test fails 3 times in a row (at the 1-minute frequency)?

Test 2: scheduled to run every 30 minutes. Does this mean an alert email is triggered when the test fails at any time during the scheduled frequency?
Hi Experts, my data source is a CSV file containing columns such as TIMESTAMP, APPLICATION, MENU_DES, REPORTING_DEPT, USER_TYPE, and USR_ID. I have developed a dashboard that includes a time picker and a pivot table utilizing this data source. Currently, the user wishes to filter the pivot table by APPLICATION. I have implemented a dropdown menu for APPLICATION and established a search query accordingly; however, the dropdown only displays "All", and the search query doesn't seem to be returning values to the dropdown list. Additionally, I need to incorporate a filter condition for APPLICATION in the pivot table based on the selection made from the dropdown menu. Could you please assist me with this? Below is my dashboard code.

<form hideChrome="true" version="1.1">
  <label>Screen log view</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-30d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="SelectedApp" searchWhenChanged="true">
      <label>Application Name</label>
      <search>
        <query>
          index="idxmainframe" source="*_screen_log.CSV"
          | table APPLICATION
          | dedup APPLICATION
          | sort APPLICATION
        </query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <fieldForLabel>apps</fieldForLabel>
      <fieldForValue>apps</fieldForValue>
      <choice value="*">All</choice>
      <default>All</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| pivot screen ds dc(USR_ID) AS "Distinct Count of USR_ID" SPLITROW APPLICATION AS APPLICATION SPLITROW MENU_DES AS MENU_DES SPLITROW REPORTING_DEPT AS REPORTING_DEPT SPLITCOL USER_TYPE BOTTOM 0 dc(USR_ID) ROWSUMMARY 0 COLSUMMARY 0 NUMCOLS 100 SHOWOTHER 1
          | sort 0 APPLICATION MENU_DES REPORTING_DEPT
          </query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
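For what it's worth, a sketch of the dropdown with the two most likely culprits adjusted: fieldForLabel/fieldForValue must name the field the populating search actually returns (APPLICATION, not apps), and the default should be the choice value "*" so the "All" choice resolves:

<input type="dropdown" token="SelectedApp" searchWhenChanged="true">
  <label>Application Name</label>
  <search>
    <query>index="idxmainframe" source="*_screen_log.CSV" | stats count by APPLICATION | fields APPLICATION | sort APPLICATION</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <fieldForLabel>APPLICATION</fieldForLabel>
  <fieldForValue>APPLICATION</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
</input>

For the pivot panel, appending a plain search filter after the pivot command keeps the "*" wildcard working, which pivot FILTER clauses do not:

| sort 0 APPLICATION MENU_DES REPORTING_DEPT | search APPLICATION="$SelectedApp$"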
I'm comparing two indexes, A and B, using the hostname as the common field. My current search successfully identifies whether each hostname in index A is present in index B. However, I also want to include additional information from index A, such as the operating system and device type, in the output; this information is not present in index B. How can I modify my query to display the operating system alongside the status (missing/ok) for each hostname? Below is the query I am using:

index=A sourcetype="Any"
| eval Hostname=lower(Hostname)
| table Hostname
| dedup Hostname
| append [ search index=B sourcetype="foo" | eval Hostname=lower(Reporting_Host) | table Hostname | dedup Hostname ]
| stats count by Hostname
| eval match=if(count=1, "missing", "ok")
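A minimal sketch, assuming the index A events carry fields named os and device_type (substitute your real field names): carry the extra fields through stats instead of table/dedup, and flag the index B rows explicitly rather than inferring presence from count, which breaks once extra fields are added.

index=A sourcetype="Any"
| eval Hostname=lower(Hostname)
| stats values(os) as os, values(device_type) as device_type by Hostname
| append [ search index=B sourcetype="foo" | eval Hostname=lower(Reporting_Host) | stats count by Hostname | eval in_B="yes" | fields Hostname in_B ]
| stats values(os) as os, values(device_type) as device_type, values(in_B) as in_B by Hostname
| eval match=if(isnull(in_B), "missing", "ok")
| table Hostname os device_type match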
Hello Splunk Community, we are currently using Splunk Enterprise 9.1.5 and DB Connect 3.7 to collect data from a Snowflake database view. The view returns data correctly when queried directly via SQL. Here are the specifics of our setup and the issue we're encountering:

- Data collection interval: every 11 minutes
- Data volume: approximately 75,000 to 80,000 events per day, with peak times around 7 AM to 9 AM CST and 2 PM to 4 PM CST (approximately 20,000 events during these periods)
- Unique identifier: the data contains a unique ID column generated by a sequence that increments by 1
- Timestamp column: the table includes a STARTDATE column, which is a TIMESTAMP_NTZ (no timezone) in UTC time

Our DB Connect configuration is as follows:

- Rising column: ID
- Metadata: _time is set to the STARTDATE field

The issue we're facing is that Splunk is not ingesting all the data; approximately 30% of the data is missing. The ID column has been verified to be unique, so we suspect that STARTDATE might be causing the issue. Although each event has a unique ID, the STARTDATE may not be unique, since multiple events can occur simultaneously in our large environment. Has anyone encountered a similar issue, or does anyone have suggestions on how to address this problem? Any insights would be greatly appreciated. Thank you!
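One thing worth checking, sketched here with a placeholder name (MY_VIEW stands in for the actual view): with a rising column, DB Connect checkpoints the highest ID seen in each batch, so the query must return rows in strictly ascending ID order. If the view's own ordering (for example by the non-unique STARTDATE, or no ORDER BY at all) lets rows arrive out of ID order, any ID below the saved checkpoint is skipped on the next run, which looks exactly like random data loss.

SELECT *
FROM MY_VIEW
WHERE ID > ?
ORDER BY ID ASC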