All Posts


If I understand correctly, you would like the final output to be two columns, where one shows the machines that SHOULD appear, and the second shows the machines that DO appear? Then you could see which machines are not appearing and therefore need attention? E.g.

SHOULD_APPEAR   DO_APPEAR
host1           host1
host2
host3           host3
...             ...
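For what it's worth, here is a minimal, untested sketch of one way to get that two-column shape, assuming (as in the post below) a lookup named hosts.csv with a host column and events matching "daily.cvd":

index=* "daily.cvd"
| stats count BY host
| eval DO_APPEAR=host
| append
    [ | inputlookup hosts.csv
    | eval SHOULD_APPEAR=host ]
| stats values(SHOULD_APPEAR) AS SHOULD_APPEAR values(DO_APPEAR) AS DO_APPEAR BY host
| table SHOULD_APPEAR DO_APPEAR

Each row then represents one host, with SHOULD_APPEAR filled in when the host is in the lookup and DO_APPEAR filled in when it actually reported.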
Hello everyone, I'd like to start out by saying I'm really quite new to Splunk, and we run older versions (6.6.3 and 7.2.3). I'm looking to have a search that will do the following:

- Look up the current hosts in our system, which I can get with the following search:

index=* "daily.cvd"
| dedup host
| table host

- Then compare to a CSV file that has 1 column, with A1 being "host" and all other entries being the hosts that SHOULD be present/accounted for.

Using ChatGPT I was able to get something like the below, which on its own will properly read the CSV file and output the hosts in it:

| append
    [ | inputlookup hosts.csv
    | rename host as known_hosts
    | stats values(known_hosts) as known_hosts ]
| eval source="current"
| eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing")
| eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status)
| mvexpand current_hosts
| mvexpand known_hosts
| table current_hosts, known_hosts, status

However, when I combine the two, it shows me 118 results (it should only be 59), there are no results in the "current_hosts" column, and after 59 blank results the "known_hosts" column then shows the correct results from the CSV:

index=* "daily.cvd"
| dedup host
| table host
| append
    [ | inputlookup hosts.csv
    | rename host as known_hosts
    | stats values(known_hosts) as known_hosts ]
| eval source="current"
| eval status=if(isnull(mvfind(known_hosts, current_hosts)), "New", "Existing")
| eval status=if(isnull(mvfind(current_hosts, known_hosts)), "Missing", status)
| mvexpand current_hosts
| mvexpand known_hosts
| table current_hosts, known_hosts, status

I'd love to have any help on this; I wouldn't be surprised if ChatGPT is making things more difficult than needed. Thanks in advance!
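For reference, the blank rows usually come from a field-name mismatch: table host outputs a field named host, while the eval expressions reference current_hosts and known_hosts, which never exist on the event rows (which is also why 59 blank event rows plus the 59 expanded lookup rows give 118 results). A minimal, untested sketch of an alternative that lists only the missing hosts, under the same hosts.csv assumption:

| inputlookup hosts.csv
| search NOT
    [ search index=* "daily.cvd"
    | dedup host
    | table host ]

The subsearch returns the hosts that are reporting, and the NOT removes them from the lookup, leaving only the hosts that need attention.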
This is not a reliable way. If any other host mentions the host we're after, such an event will get routed to syslog...
index=A sourcetype="Any"
| eval Hostname=lower(Hostname)
| table Hostname os device_type ```# Include os and device_type fields```
| dedup Hostname
| append
    [ search index=B sourcetype="foo"
    | eval Hostname=lower(Reporting_Host)
    | table Hostname
    | dedup Hostname ]
| stats values(os) as os values(device_type) as device_type count by Hostname
| eval match=if(count=1, "missing", "ok")
| table Hostname os device_type match

------
If you find this solution helpful, please consider accepting it and awarding karma points!
Hello, I'm new to the Splunk Synthetics platform and looking for guidance on how the alert conditions below work.

Test 1: Scheduled to run every 1 minute. Does this mean an alert email is triggered when the test fails 3 times in a row (at the 1-minute frequency)?

Test 2: Scheduled to run every 30 minutes. Does this mean an alert email is triggered when the test fails at any time during the scheduled frequency?
Does using alltime help?
I am using Splunk Enterprise 9.2.1 on CentOS Linux kernel 3.10.0-1160.119.1.el7.x86_64, and my desktop OS is Windows 10 Enterprise. I do not switch to RTL, as I exclusively use LTR; in this case, the RTL characters are included as titles in some data. I got it to work by creating a macro for the eval function and only pasting in the RTL text as the very last step before saving it. Then I just added the macro to my search query, so I did not need to include any of the RTL-encoded characters in the search itself explicitly.
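As a sketch of that workaround (the macro name, field name, and placeholder below are hypothetical), the macro can be defined in macros.conf, or via Settings > Advanced search > Search macros, and then referenced by name so the RTL text never has to be typed in the search bar:

[rtl_title_eval]
definition = eval matched_title=if(title="<RTL title pasted here as the last step>", "yes", "no")

It is then used in the search query as `rtl_title_eval`.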
Hi Experts, my data source consists of a CSV file containing columns such as TIMESTAMP, APPLICATION, MENU_DES, REPORTING_DEPT, USER_TYPE, and USR_ID. I have developed a dashboard that includes a time picker and a pivot table utilizing this data source. Currently, the user wishes to filter the pivot table by APPLICATION. I have implemented a dropdown menu for APPLICATION and established a search query accordingly. However, the dropdown only displays "All," and the search query doesn't seem to be returning values to the dropdown list. Additionally, I need to incorporate a filter condition for APPLICATION in the pivot table based on the selection made from the dropdown menu. Could you please assist me with this? Below is my dashboard code.

<form hideChrome="true" version="1.1">
  <label>Screen log view</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-30d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="SelectedApp" searchWhenChanged="true">
      <label>Application Name</label>
      <search>
        <query>index="idxmainframe" source="*_screen_log.CSV"
| table APPLICATION
| dedup APPLICATION
| sort APPLICATION</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <fieldForLabel>apps</fieldForLabel>
      <fieldForValue>apps</fieldForValue>
      <choice value="*">All</choice>
      <default>All</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| pivot screen ds dc(USR_ID) AS "Distinct Count of USR_ID"
SPLITROW APPLICATION AS APPLICATION
SPLITROW MENU_DES AS MENU_DES
SPLITROW REPORTING_DEPT AS REPORTING_DEPT
SPLITCOL USER_TYPE
BOTTOM 0 dc(USR_ID)
ROWSUMMARY 0 COLSUMMARY 0 NUMCOLS 100 SHOWOTHER 1
| sort 0 APPLICATION MENU_DES REPORTING_DEPT</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
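Not tested against this dashboard, but two things stand out in the XML above: fieldForLabel/fieldForValue are set to apps while the populating search only returns a field named APPLICATION, and the pivot query never references $SelectedApp$. A sketch of the corrected dropdown follows; the pivot would additionally need something like FILTER APPLICATION is "$SelectedApp$", with separate handling for the "All" case, since is does not match the * wildcard:

<input type="dropdown" token="SelectedApp" searchWhenChanged="true">
  <label>Application Name</label>
  <search>
    <query>index="idxmainframe" source="*_screen_log.CSV"
| table APPLICATION
| dedup APPLICATION
| sort APPLICATION</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <fieldForLabel>APPLICATION</fieldForLabel>
  <fieldForValue>APPLICATION</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
</input>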
I'm comparing two indexes, A and B, using the hostname as the common field. My current search successfully identifies whether each hostname in index A is present in index B. However, I also want to include additional information from index A, such as the operating system and device type, in the output. This information is not present in index B. How can I modify my query to display the operating system alongside the status (missing/ok) for each hostname? Below is the query I am using:

index=A sourcetype="Any"
| eval Hostname=lower(Hostname)
| table Hostname
| dedup Hostname
| append
    [ search index=B sourcetype="foo"
    | eval Hostname=lower(Reporting_Host)
    | table Hostname
    | dedup Hostname ]
| stats count by Hostname
| eval match=if(count=1, "missing", "ok")
What version of Splunk are you using? What OS are you using on your desktop? What do you use to switch the input from LTR to RTL?
Thanks for the idea. Unfortunately, that's not going to work. I have to use a lookup table to get the Site, and mstats insists on being the first command in the query.

index=metrics host=*
| rex field=host "^(?<host>[\w\d-]+)\."
| lookup dns.csv sd_hostname AS host
| timechart span=5m partial=f limit=0 per_second(Query) as QPS by Site

I also tried using mstats BY host, but that did not return any results.
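One pattern that sometimes works is to run mstats first and apply the lookup afterwards. This is only an untested sketch, assuming Splunk 8.0+ mstats syntax, a metric named Query, and the same dns.csv lookup; the eval replicates per_second for the 5-minute (300-second) span:

| mstats sum(Query) AS total WHERE index=metrics span=5m BY host
| rex field=host "^(?<host>[\w\d-]+)\."
| lookup dns.csv sd_hostname AS host
| eval QPS=total/300
| timechart span=5m partial=f limit=0 sum(QPS) AS QPS BY Site

If the mstats BY host variant returns nothing, running | mcatalog values(metric_name) WHERE index=metrics can confirm the actual metric and dimension names.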
In my case I was sending TCP info (JSON) through the REST API, and I had to recreate my source type configuration like this:

Name: Whatever
Description: Whatever
Destination App: Whatever
Category: Whatever
Indexed extractions: json

Next, in the Advanced tab, you need to add this extra setting:

KV_MODE = none

The reason is that the JSON I send via the API already contains the event attribute in the way Splunk expects, so KV_MODE (key-value mode) should be set to none; this way you avoid double-parsing the event JSON data.

{
  "sourcetype": "MyCustomSourceType",
  "index": "index-name",
  "event": {
    "a": "aa",
    "n": 1,
    .....
  }
}
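For reference, the same configuration expressed as a props.conf stanza (stanza name matching the example source type above):

[MyCustomSourceType]
INDEXED_EXTRACTIONS = json
KV_MODE = none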
Hi @Fadil.CK, Thanks for sharing the solution! 
In general, the recommended practice is for lower-tier processes to run an older version than the higher-tier processes (tiers go from forwarders up to indexers, search heads, and managers). Since Cribl is in the mix, however, it's more important for the forwarders to run a version that is compatible with the workers.
Hello Splunk Community,

We are currently using Splunk Enterprise 9.1.5 and DB Connect 3.7 to collect data from a Snowflake database view. The view returns data correctly when queried directly via SQL. Here are the specifics of our setup and the issue we're encountering:

- Data Collection Interval: Every 11 minutes
- Data Volume: Approximately 75,000 to 80,000 events per day, with peak times around 7 AM to 9 AM CST and 2 PM to 4 PM CST (approximately 20,000 events during these periods)
- Unique Identifier: The data contains a unique ID column generated by a sequence that increments by 1
- Timestamp Column: The table includes a STARTDATE column, which is a Timestamp_NTZ (no timezone) in UTC time

Our DB Connect configuration is as follows:

- Rising Column: ID
- Metadata: _time is set to the STARTDATE field

The issue we're facing is that Splunk is not ingesting all the data; approximately 30% of the data is missing. The ID column has been verified to be unique, so we suspect that the STARTDATE might be causing the issue. Although each event has a unique ID, the STARTDATE may not be unique since multiple events can occur simultaneously in our large environment.

Has anyone encountered a similar issue, or does anyone have suggestions on how to address this problem? Any insights would be greatly appreciated. Thank you!
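For reference, a DB Connect rising-column input effectively runs a checkpointed query of roughly this shape each interval (the view name here is hypothetical; ? is the stored checkpoint value). One thing worth verifying is that the ORDER BY really is on the rising column, because ordering on a non-unique column such as STARTDATE can skip rows that share the checkpoint value:

SELECT *
FROM my_snowflake_view
WHERE ID > ?
ORDER BY ID ASC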
If you have a search head cluster on-prem, try electing a new captain to force-push a new SHC bundle. If that doesn't work, more information would be required about how users and roles are configured and whether anything has changed there. Is there anything configured via the auth .conf files that no longer shows up?
Hi @Gravoc , at first check if the lookup name is correct (it's case sensitive). Then check if you see the lookup using the Splunk Lookup Editor App. Then check if you have created also the Lookup... See more...
Hi @Gravoc , at first check if the lookup name is correct (it's case sensitive). Then check if you see the lookup using the Splunk Lookup Editor App. Then check if you have created also the Lookup definition for this lookup. At least check the grants on lookup and lookup definition. Ciao. Giuseppe
Hi @tschmoney1337,
please share your full search, because you can modify the field name in rows but not in columns.
E.g., if you have a timestamp, you should use stats and eval, and then put it in columns:

<your_search>
| bin span=1mon _time
| stats count BY _time
| eval current_value = strftime(_time, "%B")."_value"
| table current_value count
| transpose column_name=current_value header_field=current_value

I cannot test it, but it should be correct or very close.
Ciao.
Giuseppe
Hi Splunk Experts, I hope to get a quick hint on my issue. I have a Splunk Cloud setup with two search heads, one of which is dedicated to Enterprise Security. I have different lookups on this search head containing, e.g., all user attributes. I wanted to enhance a specific search using the lookup command as described in the documentation. Additionally, I can access and view the lookup with the inputlookup command, confirming the file’s existence and proper permissions on the search head. The search I have trouble with (simplified):   index=main source_type=some_event_related_to_users | lookup ldap_users.csv identity as src_user   However, this search instantaneously fails with:   [idx-[...].splunkcloud.com,idx-[...].splunkcloud.com,idx-[...].splunkcloud.com] The lookup table 'ldap_users.csv' does not exist or is not available.     I must confess I am rather new to Splunk and even newer to running a Splunk cluster. So I do not really understand why my indexers are looking for the file in the first place. I assumed that the search head would handle the lookup. In addition, as I am a Splunk Cloud customer, I don’t have access to the indexers anyway. Can someone give me a pointer on how to achieve such a query in a Splunk Cloud Environment?
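One hedged suggestion: by default, lookups are pushed to the indexers as part of the search-time knowledge bundle, and if a lookup is too large or excluded from bundle replication, the indexers fail with exactly this error. Forcing the lookup to run on the search head with local=true often sidesteps that (a sketch, using the same names as above):

index=main source_type=some_event_related_to_users
| lookup local=true ldap_users.csv identity AS src_user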
Hi Team,

Currently, we are using Splunk UF agents installed on all infra servers, which receive their configuration from deployment servers; both are running version 9.1.2. These logs are forwarded to the Splunk Cloud console via Cribl workers, and the Splunk Cloud indexers and search head are running version 9.2.2.

Our question: if we upgrade the Splunk UF and the Splunk Enterprise version on the deployment servers from 9.1.2 to 9.3.0, will it impact the cloud components (due to compatibility issues), or will it not, given that the cloud components receive logs indirectly via Cribl? Could you please clarify?