Hello, in both clustered and standalone environments, after upgrading Splunk core first and then Splunk ES, Incident Review no longer works and shows no notables. The macro `notable` is in error, and we can see SA-Utils Python errors in the log files.
Hi Splunk Experts, I have configured HEC and tried to send log data via the OTel Collector, but I can't find a service for the collector. Kindly suggest how to enable the collector service so it can receive data from the OTel Collector. Much appreciated for your inputs. Regards, Eshwar
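For setups like this, a common pattern is to run the OpenTelemetry Collector (contrib distribution) with the `splunk_hec` exporter pointed at your HEC endpoint. A minimal sketch, assuming placeholder token, endpoint, and index values that you would replace with your own:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  splunk_hec:
    # Hypothetical values - substitute your real HEC token and endpoint
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://splunk.example.com:8088/services/collector"
    source: "otel"
    sourcetype: "otel"
    index: "main"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```

The collector then runs as a service with this file, e.g. `otelcol-contrib --config config.yaml`; the OTLP receiver listens on 4317 and the exporter forwards to HEC.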
Hello All, I am looking for a query that can provide me with a list of sourcetypes that have not been searched. Kindly suggest.
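One hedged approach: compare all sourcetypes that exist (via tstats) against sourcetypes mentioned in the search text recorded in the _audit index. A sketch, which assumes searches reference sourcetypes literally as `sourcetype=...` in their SPL (it will miss sourcetypes reached via macros, eventtypes, or data models):

```spl
| tstats count where index=* by sourcetype
| fields sourcetype
| search NOT [ search index=_audit action=search info=granted search=*sourcetype*
    | rex field=search max_match=0 "sourcetype\s*=\s*\"?(?<sourcetype>[^\s\"]+)"
    | dedup sourcetype
    | fields sourcetype ]
```

Anything returned is a sourcetype that exists in your indexes but was not named in any audited search over the chosen time range.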
For example, I have a link to a specific trace: https://xxxx.signalfx.com/#/apm/traces/2459682daf1fe95db9bbff2042a1ec0e This shows me the whole trace waterfall from the beginning of the trace. Now, I want to be able to access this trace from a specific start_time and see it until end_time. Is that possible? If yes, what would the correct link be?
How to fix "Could not load lookup=LOOKUP-autolookup_prices"?
I have this query:

index=x host=y "searchTerm" | stats avg(Field1) avg(Field2)

which returns N statistics. I would like to modify my query so that the first stats value (statistics[0]), the middle stats value ((statistics[0]+statistics[N])/length(statistics)), and the final stats value (statistics[N]) are returned by the same query. I have tried using head and tail, but that still limits the output to the specified value after 'head' or 'tail'. What other options are available?
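One hedged sketch, assuming the "N statistics" come from per-time-bucket averages: collect the bucket values into a multivalue field with stats list(), then pick individual entries with mvindex(). Shown for one field only:

```spl
index=x host=y "searchTerm"
| timechart avg(Field1) as avg1
| stats list(avg1) as vals
| eval first=mvindex(vals, 0),
       last=mvindex(vals, -1),
       middle=mvindex(vals, floor(mvcount(vals)/2))
| table first middle last
```

mvindex() takes negative offsets, so -1 is always the final value; swap the `middle` definition for (first+last)/mvcount(vals) if that is the intended formula.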
What would cause a command-line query (bin/splunk search "...") to return duplicate results compared to what the UI returns?
Hello everyone, I'd like to start out by saying I'm really quite new to Splunk, and we run older versions (6.6.3 and 7.2.3). I'm looking for a search that will do the following:

- Look up the current hosts in our system, which I can get with the following search:

index=* "daily.cvd" | dedup host | table host

- Then compare to a CSV file that has one column, with A1 being "host", and all other entries being the hosts that SHOULD be present/accounted for.

Using ChatGPT I was able to get something like the below, which on its own will properly read the CSV file and output the hosts in it:

| append [ | inputlookup hosts.csv | rename host as known_hosts | stats values(known_hosts) as known_hosts ] | eval source="current" | eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing") | eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status) | mvexpand current_hosts | mvexpand known_hosts | table current_hosts, known_hosts, status

However, when I combine the two, it shows me 118 results (it should only be 59), there are no results in the "current_hosts" column, and after 59 blank results the "known_hosts" column then shows the correct results from the CSV:

index=* "daily.cvd" | dedup host | table host | append [ | inputlookup hosts.csv | rename host as known_hosts | stats values(known_hosts) as known_hosts ] | eval source="current" | eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing") | eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status) | mvexpand current_hosts | mvexpand known_hosts | table current_hosts, known_hosts, status

I'd love any help on this; I wouldn't be surprised if ChatGPT is making things more difficult than needed. Thanks in advance!
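A hedged alternative that avoids mvfind/mvexpand entirely: tag each host with where it was seen, then combine per host with stats. A sketch, assuming the CSV column is named host:

```spl
index=* "daily.cvd"
| stats count by host
| eval source="current"
| append [ | inputlookup hosts.csv | eval source="known" ]
| stats values(source) as sources by host
| eval status=case(mvcount(sources)==2, "Existing",
                   sources=="current", "New",
                   sources=="known", "Missing")
| table host status
```

A host seen in both the search and the CSV gets two source values ("Existing"); a host only in the search is "New"; a host only in the CSV is "Missing".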
Hello, I'm new to the Splunk Synthetics platform and looking for guidance on how the alert conditions below work. Test 1 is scheduled to run every 1 minute. Does this mean an alert email is triggered when the test fails 3 times in a row (at the 1-minute frequency)? Test 2 is scheduled to run every 30 minutes. Does this mean an alert email is triggered when the test fails at any time during the scheduled frequency?
Hi Experts, my data source is a CSV file containing columns such as TIMESTAMP, APPLICATION, MENU_DES, REPORTING_DEPT, USER_TYPE, and USR_ID. I have developed a dashboard that includes a time picker and a pivot table using this data source. Currently, the user wishes to filter the pivot table by APPLICATION. I have implemented a dropdown menu for APPLICATION and set up a search query accordingly. However, the dropdown only displays "All", and the search query doesn't seem to be returning values to the dropdown list. Additionally, I need to add a filter condition for APPLICATION to the pivot table based on the selection made in the dropdown menu. Could you please assist me with this? Below is my dashboard code.

<form hideChrome="true" version="1.1">
  <label>Screen log view</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-30d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="SelectedApp" searchWhenChanged="true">
      <label>Application Name</label>
      <search>
        <query>index="idxmainframe" source="*_screen_log.CSV" | table APPLICATION | dedup APPLICATION | sort APPLICATION</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <fieldForLabel>apps</fieldForLabel>
      <fieldForValue>apps</fieldForValue>
      <choice value="*">All</choice>
      <default>All</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| pivot screen ds dc(USR_ID) AS "Distinct Count of USR_ID" SPLITROW APPLICATION AS APPLICATION SPLITROW MENU_DES AS MENU_DES SPLITROW REPORTING_DEPT AS REPORTING_DEPT SPLITCOL USER_TYPE BOTTOM 0 dc(USR_ID) ROWSUMMARY 0 COLSUMMARY 0 NUMCOLS 100 SHOWOTHER 1 | sort 0 APPLICATION MENU_DES REPORTING_DEPT</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
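A likely cause, offered as a hedged suggestion: the dropdown's populating search returns a field named APPLICATION, but fieldForLabel and fieldForValue reference a field called apps, which never exists, so only the static "All" choice is shown. A sketch of a corrected input:

```xml
<input type="dropdown" token="SelectedApp" searchWhenChanged="true">
  <label>Application Name</label>
  <search>
    <query>index="idxmainframe" source="*_screen_log.CSV"
| stats count by APPLICATION
| sort APPLICATION</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <fieldForLabel>APPLICATION</fieldForLabel>
  <fieldForValue>APPLICATION</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
</input>
```

For the filter, one simple option is to append `| search APPLICATION="$SelectedApp$"` after the pivot command in the panel's query; the "All" choice (value *) then matches everything.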
I'm comparing two indexes, A and B, using the hostname as the common field. My current search successfully identifies whether each hostname in index A is present in index B. However, I also want to include additional information from index A, such as the operating system and device type, in the output. This information is not present in index B. How can I modify my query to display the operating system alongside the status (missing/ok) for each hostname? Below is the query I am using:

index=A sourcetype="Any" | eval Hostname=lower(Hostname) | table Hostname | dedup Hostname | append [ search index=B sourcetype="foo" | eval Hostname=lower(Reporting_Host) | table Hostname | dedup Hostname ] | stats count by Hostname | eval match=if(count=1, "missing", "ok")
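One hedged way to carry the extra fields through the comparison, assuming index A has fields named OS and Device_Type (adjust to your actual field names):

```spl
index=A sourcetype="Any"
| eval Hostname=lower(Hostname)
| stats values(OS) as OS values(Device_Type) as Device_Type by Hostname
| eval in_A=1
| append [ search index=B sourcetype="foo"
    | eval Hostname=lower(Reporting_Host)
    | stats count by Hostname
    | eval in_B=1 ]
| stats values(OS) as OS values(Device_Type) as Device_Type
        max(in_A) as in_A max(in_B) as in_B by Hostname
| where in_A=1
| eval match=if(isnull(in_B), "missing", "ok")
| table Hostname OS Device_Type match
```

The flag fields replace the count-based test, so adding columns from index A no longer disturbs the match logic.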
The Notification Team is migrating our email service provider. As the rollout progresses, Splunk has enabled the new email service provider for Splunk Cloud IL2, FedRAMP Moderate, and FedRAMP High customers, driven by customer requests from Splunk Ideas.

Background
Customer-configured email-based alerting is a first-class workflow supported by Splunk. We know how vital alerting can be to our customers. We are pleased to announce that Splunk Cloud FedRAMP High customers are now also able to receive email notifications for critical configured email-based alerts from their stacks. Please take a moment to review a summary of the changes being introduced.

Summary of Changes
Email notification will be available for the use cases below:
- sendemail SPL command
- Saved searches
- Emails for backgrounded jobs
- Emails for health reports of the stack
- Emails initiated from within apps

Features and customer impact:
- MAIL FROM: The MAIL FROM value in the SMTP envelope of emails originating from Splunk Cloud stacks will be mail.splunkcloudfed.com. The From field in these emails will be set to alerts@splunkcloudfed.com. No downtime is expected and no action is required. If you need clarification on any existing network policies on your end, please contact Customer Support so we can work with you to help ensure that you continue receiving email-based alerts.
- Dynamic IP addresses for origin mail server: The origin email server is expected to have a dynamic IP address range in the future.
- Email size: Email size, including body/text/images/attachments, is up to 40 MB.
Hello Splunk Community, We are currently using Splunk Enterprise 9.1.5 and DB Connect 3.7 to collect data from a Snowflake database view. The view returns data correctly when queried directly via SQL. Here are the specifics of our setup and the issue we're encountering: Data Collection Interval: Every 11 minutes Data Volume: Approximately 75,000 to 80,000 events per day, with peak times around 7 AM to 9 AM CST and 2 PM to 4 PM CST (approximately 20,000 events during these periods) Unique Identifier: The data contains a unique ID column generated by a sequence that increments by 1 Timestamp Column: The table includes a STARTDATE column, which is a Timestamp_NTZ (no timezone) in UTC time Our DB Connect configuration is as follows: Rising Column: ID Metadata: _time is set to the STARTDATE field The issue we're facing is that Splunk is not ingesting all the data; approximately 30% of the data is missing. The ID column has been verified to be unique, so we suspect that the STARTDATE might be causing the issue. Although each event has a unique ID, the STARTDATE may not be unique since multiple events can occur simultaneously in our large environment. Has anyone encountered a similar issue, or does anyone have suggestions on how to address this problem? Any insights would be greatly appreciated. Thank you!
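One thing worth checking, as a hedged suggestion: with a rising column, DB Connect stores the highest value it has seen as a checkpoint, so the input's SQL must return rows in ascending order of that column or rows arriving between runs can be skipped. A sketch, with a hypothetical view name:

```sql
-- Rising-column query: DB Connect substitutes the stored
-- checkpoint for the ? placeholder on each run.
SELECT ID, STARTDATE, OTHER_COLUMNS
FROM MY_SNOWFLAKE_VIEW
WHERE ID > ?
ORDER BY ID ASC
```

It is also worth confirming that the per-run fetch/row limits in the input settings exceed your peak volume (roughly 20,000 events in a 2-hour window at an 11-minute interval), since rows beyond the limit are not revisited once the checkpoint advances past them.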
Hi Splunk Experts, I hope to get a quick hint on my issue. I have a Splunk Cloud setup with two search heads, one of which is dedicated to Enterprise Security. I have different lookups on this search head containing, e.g., all user attributes. I wanted to enhance a specific search using the lookup command as described in the documentation. Additionally, I can access and view the lookup with the inputlookup command, confirming the file’s existence and proper permissions on the search head. The search I have trouble with (simplified):   index=main source_type=some_event_related_to_users | lookup ldap_users.csv identity as src_user   However, this search instantaneously fails with:   [idx-[...].splunkcloud.com,idx-[...].splunkcloud.com,idx-[...].splunkcloud.com] The lookup table 'ldap_users.csv' does not exist or is not available.     I must confess I am rather new to Splunk and even newer to running a Splunk cluster. So I do not really understand why my indexers are looking for the file in the first place. I assumed that the search head would handle the lookup. In addition, as I am a Splunk Cloud customer, I don’t have access to the indexers anyway. Can someone give me a pointer on how to achieve such a query in a Splunk Cloud Environment?
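A quick hedged workaround is to force the lookup to run on the search head with local=true, which stops the indexers from trying to resolve the file:

```spl
index=main source_type=some_event_related_to_users
| lookup local=true ldap_users.csv identity as src_user
```

Longer term, creating a lookup definition for the file (Settings > Lookups > Lookup definitions) and sharing it globally usually lets the knowledge bundle distribute it so the distributed form of the search works too.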
Hi Team, currently we are using Splunk UF agents installed on all infra servers, which receive their configuration from deployment servers; both are running version 9.1.2. These logs are forwarded to the Splunk Cloud console via Cribl workers, and the Splunk Cloud indexers and search head are running version 9.2.2. Our question: if we upgrade the Splunk UF and the Splunk Enterprise version on the deployment servers from 9.1.2 to 9.3.0, will it impact the cloud components (due to compatibility issues), or will it not, since the cloud components receive logs indirectly via Cribl? Could you please clarify?
Hi everyone! I'm trying to figure out how to map a field name dynamically to a column of a table. As it stands, the table looks like this:

twomonth_value  onemonth_value  current_value
5               3               1

I want the output to be instead:

july_value  august_value  september_value
5           3             1

I am able to get the correct dynamic name for each month via

| eval current_value = strftime(relative_time(now(), "@mon"), "%B")."_value"

However, I'm unsure how to change the field name directly in the table. Thanks in advance!
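A hedged sketch using Splunk's eval {field} syntax, which creates a field whose name is the value of another field; the helper fields and originals are then dropped (note that %B yields capitalized month names like "September", hence the lower()):

```spl
| eval m2=lower(strftime(relative_time(now(), "-2mon@mon"), "%B"))."_value",
       m1=lower(strftime(relative_time(now(), "-1mon@mon"), "%B"))."_value",
       m0=lower(strftime(relative_time(now(), "@mon"), "%B"))."_value"
| eval {m2}=twomonth_value, {m1}=onemonth_value, {m0}=current_value
| fields - m0 m1 m2 twomonth_value onemonth_value current_value
```

The table command (or fields) can then reference the dynamically named columns with a wildcard, e.g. `| table *_value`.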
Hey there, Splunk Community! Exciting news: Splunk’s GovSummit 2024 is returning to Washington, D.C. on Wednesday, December 11, and registration is now open! We are so thrilled to invite you and your colleagues to experience this no-cost, industry-leading event (psst, that means it's FREE!) We’ve got a full program with engaging speakers, networking opportunities, and more. Register today to discover the latest updates from Splunk — including new compliance investments and innovations in AI — that protect critical infrastructure, improve process automation, increase efficiency, and reduce visibility gaps. Attend a keynote, breakouts, bootcamps, and networking sessions and leave with actionable steps to strengthen your cybersecurity strategy by:  - Implementing Zero Trust - Designing a SOC for the future - Advancing cybersecurity and AI - Protecting critical infrastructure and high-value assets Hear from Splunk executives, including Gary Steele (President, Go to Market, Cisco), Mike Horn (Sr. Vice President, GM, Security), Bill Rowan (VP, Public Sector), and more! Event Details Wednesday, December 11, 2024 7:30 am - 4:30 pm EST Ronald Reagan Building and International Trade Center, Washington, D.C. Register now and we hope to see you there!
Hi Everyone, I am not a Splunk engineer, but I have a task to do. The sc4s.service has failed and I can't get the logs; it was working before. The error says 'Unauthorized access', but I don't have any credentials for that. Environment="SC4S_IMAGE=docker.io/splunk/scs:latest" Could you please help me figure out how to fix it? Thanks,
I am trying to write an eval expression to translate a few different languages into English. One of the languages is Hebrew, which is a right-to-left language; when I use the Hebrew text in my query, my cursor location is no longer predictable, and I cannot copy/paste the Hebrew into an otherwise left-to-right query expression. I then tried to create a macro to do the evaluation, but ran into the same issue. I even tried a different browser (Firefox vs. Brave) and a different program (Notepad++), but I always encounter the cursor/keyboard anomalies after pasting the text into my query. I need to translate a few different strings within a case eval expression. Is anyone aware of similar issues being encountered and/or of any potential workarounds? Does someone have an alternate suggestion for how I can accomplish the translations? Here is an example of what I am trying to do:

| eval appName = case(appName="플레이어","player",appName="티빙","Tving",appName=...

This Hebrew text is an example of where I run into issues: כאן ארכיון
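One hedged workaround that avoids putting RTL text in the query at all: keep the translations in a lookup file, edited in a spreadsheet where the RTL cursor behaves normally, and map the values with the lookup command. A sketch, assuming a hypothetical lookup named appname_translations.csv with columns appName and appName_en:

```spl
| lookup appname_translations.csv appName OUTPUT appName_en
| eval appName=coalesce(appName_en, appName)
| fields - appName_en
```

coalesce() keeps the original appName when no translation row matches, so untranslated values pass through unchanged.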