All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am able to see our Splunk saved searches from both Power BI and Tableau. There is a saved search I am trying to access, and it is small: 4 columns and 8 rows. I can see it in the list of saved searches in both tools, but when I try to load it in either BI tool to build a report, it times out with the message: SPLUNKODBC (40) Error with HTTP API. Failed writing received data to disk/application. I am not familiar with Splunk, but when you access a saved search, even a small one like this, is the query behind the saved search being executed? Even though the result set is small, I'm thinking the saved search may not be static, so a query is launched each time I try to connect to it. If that is the case, is it possible to create a static saved search in Splunk so that a BI tool can connect to it without a query being launched? Or is there anything else that could be causing the problem?
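A note on the static-results question: a Splunk saved search is just stored SPL, so a client request normally dispatches (re-runs) the query. One way to make the results effectively precomputed is to schedule the saved search so Splunk keeps the results of the last scheduled run; a minimal savedsearches.conf sketch (stanza name, search, and schedule are placeholders, and whether the ODBC driver can be pointed at cached results instead of re-dispatching depends on the driver's own settings):

[bi_static_report]
search = index=main sourcetype=my_data | stats count by field1 field2
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now

Inside Splunk, the cached results of the last scheduled run can then be read without re-running the search, e.g. | loadjob savedsearch="owner:app:bi_static_report".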
Hi, The lookup field values must be matched against the field values returned by the query, and the results must be shown as Yes/No depending on whether a match happens. But we are unable to match them, and we are unable to publish all of the information from the lookup fields in the results. Please assist.

My lookup file:-

My query:-

index = * sourcetype=* host=*
| rex field=source "\/u02\/logs\/patch_(?<domain_name>.+).log"
| rex field=_raw max_match=0 "\s(?<Patch_num>[^ ]+);"
| dedup host
| mvexpand Patch_num
| lookup soa_nonprod_Q_patches.csv Patch_num
| table domain_name host Patch_num patchlist
| eval match_status=if(match(Patch_num,patchlist),"Yes","No")
| table domain_name host Patch_num match_status

Result output:-

The value 18387355 is missing from the Patch_num output, and it should show 'No' in the match_status field, as this value is not present in the search results.

Regards, Satheesh
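A hedged sketch of one way to get a "No" row for lookup entries that never appear in the events: start from the events, append the full patch list from the lookup, and let a final stats merge the two (this assumes the CSV's patch column is literally named Patch_num; adjust the field names to the real CSV header):

index=* sourcetype=* host=*
| rex field=source "\/u02\/logs\/patch_(?<domain_name>.+).log"
| rex field=_raw max_match=0 "\s(?<Patch_num>[^ ]+);"
| mvexpand Patch_num
| stats values(host) AS host values(domain_name) AS domain_name by Patch_num
| append [| inputlookup soa_nonprod_Q_patches.csv | fields Patch_num ]
| stats values(host) AS host values(domain_name) AS domain_name by Patch_num
| eval match_status=if(isnull(host), "No", "Yes")
| table domain_name host Patch_num match_status

Patch numbers seen in the events get a host value and therefore "Yes"; patch numbers that only arrive via the appended lookup rows have no host and fall out as "No".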
My Web data model was set to a 3-month summary range with 67 GB+ on disk. I reduced the summary range to 1 month, and the size on disk increased to 100 GB+. This doesn't make sense; can someone help explain? I would have thought a shorter summary range equated to less size on disk.
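A hedged sketch for checking where the acceleration space is actually going, using the summarization REST endpoint (the summary.* field names are how that endpoint usually reports sizes and should be verified on your version):

| rest /services/admin/summarization by_tstats=t splunk_server=*
| eval size_gb=round('summary.size'/1024/1024/1024, 2)
| table summary.id summary.size size_gb summary.complete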
We are attempting to change the password across all universal forwarders using the command found in this link, which references this command:

./splunk edit user admin -password 'fflanda$' -role admin -auth admin:changeme

Everything seems to run with no errors; however, how do we validate that the password was actually changed to the correct one? Prior to the change I would remotely browse to a host with a UF (https://IP ADDRESS:8089), click on services, enter admin followed by the default password, and it would proceed to authenticate me. After the password change I follow the same process, but the password box keeps reappearing after every attempt, indicating that the password may not have been set correctly. How would we validate that the password has actually been changed on the UF?
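A hedged way to verify the new credentials from the command line, assuming the management port (8089) is reachable; the password below is the placeholder from the command above:

# On the forwarder host: this only returns output if the new credentials are accepted
/opt/splunkforwarder/bin/splunk list user -auth admin:'fflanda$'

# Remotely: the server/info endpoint requires authentication
curl -k -u admin:'fflanda$' https://IP_ADDRESS:8089/services/server/info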
Hey, I have been trying to solve the following issue for several days. I'm sending JSON data over TCP (9997); the data arrives completely, but the "_time" field that is saved is the actual indexing time instead of the "time" field that I added beforehand. I tried to configure props.conf both as JSON and with a regex, but neither has worked. What makes things weirder is that some of the logs that contain another timestamp field are indexed with that field. I also tried playing around with this, but nothing helped. I'm running an all-in-one deployment from a container. Thanks,
------------
props.conf

[evtx]
MAX_DAYS_AGO = 10951
#INDEXED_EXTRACTIONS = JSON
#KV_MODE = none
#TRANSFORMS-jsonextraction = json_extract
#TIME_PREFIX = time.:.
#TIME_FORMAT = %Y-%m-%dT%H:%M:%S
#TIMESTAMP_FIELDS = time
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
disabled = false
pulldown_type = 1
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TIMESTAMP_FIELDS = time
LINE_BREAKER = ([\r\n]+)
category = Structured
description = ODATA SCPI MPL JSON

Log sample - _time is indexed based on "EventData.NewTime":

{ "cribl_breaker":"fallback", "collector":"in_evtx_to_json", "time":"2023-05-18T11:41:20.687637Z", "EventData":{"NewTime":"2023-05-18T11:41:20.697000Z", "PreviousTime":"2023-05-18T11:41:20.707011Z", "ProcessId":"0x46c", "ProcessName":"C:\\Windows\\System32\\svchost.exe", "SubjectDomainName":"NT AUTHORITY", "SubjectLogonId":"0x3e5", "SubjectUserName":"LOCAL SERVICE", "SubjectUserSid":"S-1-5-19"}, "System":{"Channel":"Security", "Computer":"test123", "Correlation":null, "EventID":4616, "EventRecordID":108506811, "Execution":{"#attributes":{"ProcessID":4, "ThreadID":1752}}, "Keywords":"0x8020000000000000", "Level":0, "Opcode":0, "Provider":{"#attributes":{"Guid":"54849625-5478-4994-A5BA-3E3B0328C30D", "Name":"Microsoft-Windows-Security-Auditing"}}, "Security":null, "Task":12288, "TimeCreated":{"#attributes":{"SystemTime":"2023-05-18T11:41:20.687637Z"}}, "Version":1}}

Log sample - _time is indexed based on arrival time:

{"cribl_breaker":"fallback", "collector":"in_evtx_to_json", "time":"2023-05-06T20:01:58.205343Z", "EventData":{"HandleId":"0x6a30", "ObjectServer":"Security", "ProcessId":"0x4f4", "ProcessName":"C:\\Windows\\System32\\svchost.exe", "SubjectDomainName":"test.local", "SubjectLogonId":"0x3e4", "SubjectUserName":"test", "SubjectUserSid":"S-1-5-20"}, "System":{"Channel":"Security", "Computer":"test123", "Correlation":null, "EventID":4658, "EventRecordID":107343779, "Execution":{"#attributes":{"ProcessID":4, "ThreadID":109608}}, "Keywords":"0x8020000000000000", "Level":0, "Opcode":0, "Provider":{"#attributes":{"Guid":"54849625-5478-4994-A5BA-3E3B0328C30D", "Name":"Microsoft-Windows-Security-Auditing"}}, "Security":null, "Task":12800, "TimeCreated":{"#attributes":{"SystemTime":"2023-05-06T20:01:58.205343Z"}}, "Version":0}}
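A minimal props.conf sketch of the alternative approach, relying on TIME_PREFIX/TIME_FORMAT instead of TIMESTAMP_FIELDS. It assumes this all-in-one instance is the first Splunk component to parse the data (index-time props only apply where parsing happens) and that every event carries the top-level "time" field in the microsecond format shown in the samples:

[evtx]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
# search-time JSON extraction instead of INDEXED_EXTRACTIONS,
# so ordinary index-time timestamp extraction applies
KV_MODE = json
TIME_PREFIX = "time"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
MAX_TIMESTAMP_LOOKAHEAD = 40
MAX_DAYS_AGO = 10951
CHARSET = UTF-8

If the events arrive already parsed (for example cooked S2S from another Splunk instance), these settings would need to live on that upstream instance instead.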
I have this query to find hosts from a lookup that have zero events. There are about 100 hosts, and I can see that the query performance is slow with the subsearch used this way. Any ideas to improve this?

| inputlookup lookup.csv
| join type=outer [search index=os sourcetype=ps "abc.pid" OR "abc.bin" | stats count as heartbeat by host ]
| fillnull value=0 heartbeat
| where heartbeat=0
| stats values(host) as failed_hosts
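A hedged join-free sketch of the same logic: append the lookup's host list to the aggregated events and keep only hosts whose best heartbeat is still zero (assumes the lookup's column is named host, as the original join implies):

index=os sourcetype=ps ("abc.pid" OR "abc.bin")
| stats count AS heartbeat by host
| append [| inputlookup lookup.csv | fields host | eval heartbeat=0 ]
| stats max(heartbeat) AS heartbeat by host
| where heartbeat=0
| stats values(host) AS failed_hosts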
My company uses Splunk, and we just migrated everything from Splunk Cloud over to Splunk Enterprise. We manage quite a few servers and they are all configured similarly; however, a handful of them are not sending their logs to Splunk, even with a specified inputs.conf in /opt/splunkforwarder/etc/system/local/inputs.conf. The file path specified in the sourcetype stanza within the inputs.conf file is identical to that on servers that are flowing properly. Where might I be missing something?
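A hedged set of checks often run on one of the affected forwarders to see what it actually resolved and whether it is tailing the file; paths follow the /opt/splunkforwarder layout mentioned above:

# The merged view of all inputs.conf layers, with the file each setting came from
/opt/splunkforwarder/bin/splunk btool inputs list --debug

# Which files the tailing processor is tracking, and their read status (prompts for admin auth)
/opt/splunkforwarder/bin/splunk list inputstatus

# Recent warnings/errors from the forwarder itself (blocked queues, permission errors, etc.)
grep -iE "warn|error" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50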
Hi, It's quite easy to find which monitor inputs are activated via a host's inputs.conf by querying those from the UF's _internal log. But how can I check the same for additional Windows components like WinRegMon or admon? Basically I can see all known possible Windows monitoring components with

index=_internal host=* sourcetype=splunkd source=*splunkd.log component=ModularInputs

But how do I find which ones are activated, when I have to look at those from hundreds of nodes over a long period like 30 days? I hope to get something like this:

_time HOST WinEventLog <enabled, or even which logs are enabled>
_time HOST batch
//$SPLUNK_HOME\var\run\splunk\search_telemetry\*search_telemetry.json
//$SPLUNK_HOME\var\spool\splunk
//$SPLUNK_HOME\var\spool\splunk\...stash_hec
//$SPLUNK_HOME\var\spool\splunk\...stash_new
//$SPLUNK_HOME\var\spool\splunk\tracker.log*
_time HOST monitor
//$SPLUNK_HOME\etc\splunk.version
//$SPLUNK_HOME\var\log\splunk
//$SPLUNK_HOME\var\log\splunk\configuration_change.log
//$SPLUNK_HOME\var\log\splunk\license_usage_summary.log
//$SPLUNK_HOME\var\log\splunk\metrics.log
//$SPLUNK_HOME\var\log\splunk\splunk_instrumentation_cloud.log*
//$SPLUNK_HOME\var\log\splunk\splunkd.log
//$SPLUNK_HOME\var\log\watchdog\watchdog.log*

r. Ismo
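A hedged sketch of one way to approximate this fleet-wide from the indexed _internal data; it assumes that when an input such as WinRegMon:// or admon:// is enabled on a host, its stanza name shows up somewhere in that host's splunkd.log lines, which is worth validating on one known host before trusting the result:

index=_internal sourcetype=splunkd source=*splunkd.log*
    ("WinRegMon://" OR "admon://" OR "WinEventLog://" OR "perfmon://" OR "WinHostMon://")
| rex "(?<win_input>(WinRegMon|admon|WinEventLog|perfmon|WinHostMon)://[^\s\"']*)"
| stats latest(_time) AS last_seen values(win_input) AS enabled_inputs by host
| convert ctime(last_seen)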
I am seeing excessive query time for the CPU load used by a process, and I am not able to collect the processes (logs) that are generated by a series of processes on the host where it runs. Where can I see the logs? Which process is being triggered? And, above all, what would the solution be?
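A hedged sketch of two searches that usually help locate this on a Splunk Enterprise instance, assuming the _introspection and _audit indexes are populated (field names follow the standard resource-usage and audit sourcetypes and should be verified):

Per-process CPU over time:
index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| timechart span=5m max(data.pct_cpu) AS pct_cpu by data.process

Longest-running completed searches:
index=_audit action=search info=completed
| table _time user total_run_time search
| sort - total_run_time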
Hi Team, Please help us with the issue below. Here is the sample event:

message: Dataframe row : {"_c0":{"0":"{","1":" \"compaction_table\": [","2":" \"md_proc_control_v2\"","3":" \"md_source_control\"","4":" ]","5":" \"Timestamp\": \"2023\/06\/26 12:05:43\"","6":" \"compaction_status\": \"Successful\"","7":"}"}}

In the above message, we have an event with compaction_table, Timestamp and compaction_status. We have tried to extract the table names under compaction_table, such as md_proc_control_v2 and md_source_control, into a separate field named List using the SPL query below.

index="app_events_dwh2_de_int" _raw=*compac*
| rex "(?:\"compaction_table[\\\\]+\": \[)(?<compactionlist>[^\s:]+[^\]]+)"
| rex field=compactionlist max_match=0 "(?:[^\s:]+[^\s]+\s[\\\\]+)(?<List>[^\\\]+)"

But we are unable to extract those table names using the above SPL query. We have extracted the compactionlist field as shown below, but we are unable to extract List from the compactionlist field. Could you please help us extract md_proc_control_v2 and md_source_control into a separate field named List, the compaction status into a separate field, and the Timestamp into a separate field from the event? Below is the sample raw text for this.

Dataframe row : {"_c0":{"0":"{","1":" \"compaction_table\": [","2":" \"md_proc_control_v2\"","3":" \"md_source_control\"","4":" ]","5":" \"Timestamp\": \"2023\/06\/26 12:05:43\"","6":" \"compaction_status\": \"Successful\"","7":"}"}}
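A hedged alternative sketch that sidesteps most of the backslash escaping by anchoring on the literal key names; the [A-Za-z]\w+ pattern assumes the table names always start with a letter (so the "2", "3", "4" row indexes are skipped), and the captured Timestamp keeps the escaped slashes from the raw text:

index="app_events_dwh2_de_int" _raw=*compac*
| rex "compaction_table(?<compactionlist>[^\]]+)\]"
| rex field=compactionlist max_match=0 "(?<List>[A-Za-z]\w+)"
| rex "Timestamp\W+(?<Timestamp>\d{4}\D+\d{2}\D+\d{2}\s\d{2}:\d{2}:\d{2})"
| rex "compaction_status\W+(?<compaction_status>\w+)"
| table List Timestamp compaction_status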
Hi all, We're migrating from Splunk Connect for Kubernetes to the OpenTelemetry Collector (otel) and noticed several differences, which are breaking our dashboards. For example, to get the pods information (k8sObjects) from the otel collector, we have to search with sourcetype="kube:object:pods", whereas from Splunk Connect it's sourcetype="kube:objects:pods" - notice the plural in objects. Another example is the pod field, which is different in the otel collector: Splunk Connect: pod::*$STRING*, Otel: source::*$STRING*. Is there a way to align the otel collector to the above-mentioned format? Or is there some sort of list of the complete differences between otel and Splunk Connect? We have quite a lot of production dashboards and reports, and it would take a big effort to change/check every single one of them. Thanks a lot in advance for your help. Stefan
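One hedged way to limit the blast radius while both formats coexist is to centralize the differences in search macros, so each dashboard panel only needs a one-time change to the macro call instead of per-panel logic; a macros.conf sketch (macro names are placeholders):

[k8s_pod_objects]
definition = (sourcetype="kube:objects:pods" OR sourcetype="kube:object:pods")
iseval = 0

[k8s_pod_filter(1)]
args = pod_pattern
definition = (pod=$pod_pattern$ OR source=$pod_pattern$)
iseval = 0

Usage in a panel would then look like: index=k8s `k8s_pod_objects` `k8s_pod_filter("*my-app*")`. This is a search-side workaround, not a collector-side alignment of the sourcetypes.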
We have a 6-node SHC. We want to use the deployer to push out authorize.conf so that we can manage user/role/index access centrally. I'm looking for an example of how you control which index is seen by which user/role. For example, the role would look like:

[mail team]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = user
srchIndexesAllowed = mailgatewaylogs;maillogs;emailscanlogs
srchMaxTime = 8640000

How do I specify which users have the mail team role?

user1: mail team
user2: mail team
user3: mail team

I have not been able to find any reference or example as to how best to set this configuration centrally. Thanks in advance
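A hedged sketch of what the deployer-pushed pieces usually look like. Two caveats: Splunk role names cannot contain spaces (so something like mail_team rather than "mail team"), and the user-to-role mapping does not live in authorize.conf at all - it is set on each user account (Settings > Users, or the user CLI/REST commands) or, more commonly, in authentication.conf where LDAP/SAML groups are mapped to roles. The LDAP strategy name and group DN below are placeholders.

authorize.conf (in an app under shcluster/apps on the deployer):

[role_mail_team]
importRoles = user
srchIndexesAllowed = mailgatewaylogs;maillogs;emailscanlogs
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
srchMaxTime = 8640000

authentication.conf (maps a directory group to the role, assuming LDAP authentication):

[roleMap_corpLDAP]
mail_team = CN=Mail-Team,OU=Groups,DC=example,DC=com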
Hello, We're looking to upgrade our universal forwarders to version 9.x, and since I haven't found anything definitive yet, I wanted to ask here whether there are any issues I should be aware of with UF v9.x and v8.2.7 indexers and forwarders. Thanks
I have the Splunk DB Connect app running on version 3.4.2, so is it important to run those existing databases on that version?
Hi, I have a few services that come from processes. How can I ingest those and filter them in Splunk? For example, I want to monitor event_demon and as_server.

------------------------------------  -------  --------------
WAAE Agent (WA_AGENT)                   22036  running
WAAE Scheduler (RDV)                    22258  running
WAAE Application Server (RDV)           22158  running
-sh-4.2$ ps -ef | grep -i 22258
autosys  22258     1  1 05:27 ?        00:00:04 event_demon -A RDV
autosys  30384 29146  0 05:33 pts/0    00:00:00 grep --color=auto -i 22258
-sh-4.2$ ps -ef | grep -i 22158
autosys  22158     1  1 05:27 ?        00:00:08 as_server -A RDV
autosys  31390 29146  0 05:35 pts/0    00:00:00 grep --color=auto -i 22158
-sh-4.2$
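A hedged sketch assuming the Splunk Add-on for Unix and Linux (Splunk_TA_nix) can be deployed to the host: its ps.sh scripted input ships periodic process snapshots as sourcetype=ps, which can then be filtered for the two daemons. Index name and interval are illustrative.

inputs.conf on the forwarder:

[script://./bin/ps.sh]
sourcetype = ps
source = ps
interval = 60
index = os
disabled = 0

A rough presence check in search (event-level matching; exact field names depend on the add-on's extractions):

index=os sourcetype=ps ("event_demon" OR "as_server")
| stats latest(_time) AS last_seen count by host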
Hi Community, I am looking for the "Install app from file" option under the Apps section in Splunk Cloud, but I couldn't find it. Why is it not available in Splunk Cloud? Is there any other way to install an app into Splunk Cloud? Please suggest! Regards, Eshwar
I am able to see the Machine Agent's infrastructure metrics (CPU, RAM, etc.) being polled in the Controller, but the Metric Explorer was producing data under Application, whereas the Metric Browser under Server was able to populate the graph. Need your support to fix this.

^ Post edited by @Ryan.Paredez to remove photos that had the Controller URL visible. For security and privacy reasons, please be careful about sharing the Controller name/URL.

^ Have updated the images.
Hi guys, I want to connect my multiselect City to my multiselect Category. First I tried adding a token to the second multiselect (Category), but it doesn't work :(. Any solutions?
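A hedged Simple XML sketch of cascading multiselects, where the City token feeds the Category input's populating search; index and field names are placeholders:

<fieldset>
  <input type="multiselect" token="city_tok">
    <label>City</label>
    <fieldForLabel>City</fieldForLabel>
    <fieldForValue>City</fieldForValue>
    <search>
      <query>index=sales | stats count by City</query>
    </search>
    <valuePrefix>City="</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter> OR </delimiter>
  </input>
  <input type="multiselect" token="category_tok">
    <label>Category</label>
    <fieldForLabel>Category</fieldForLabel>
    <fieldForValue>Category</fieldForValue>
    <search>
      <query>index=sales ($city_tok$) | stats count by Category</query>
    </search>
    <valuePrefix>Category="</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter> OR </delimiter>
  </input>
</fieldset>

The valuePrefix/valueSuffix/delimiter settings make $city_tok$ expand to City="A" OR City="B", which is what the Category search consumes.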
Hi All, I was displaying some data in the Heatmap Viz like
Hi Splunkers, for our customer we collect logs from Windows systems. The main configuration details are:

- Logs go from DCs to a dedicated HF and then to Splunk Cloud, so the flow is: DCs -> HF -> Splunk Cloud.
- Due to customer policy, we avoided UFs and used WMI collection, so on the HF we configured, as a data input, the Remote event log collection.
- Configuring the Remote event log collection, we put one DC hostname in the "Collect logs from this host" box and then added the remaining ones in the box for additional hosts. I mean: with only one Remote event log collection data input, we are collecting logs from all DCs, and there are 12 of them.
- We collect the following data types: Application, System, Security, DNS, PowerShell.
- Currently we have applied no blacklist and/or other filtering mechanism, so we are collecting all logs from the above categories.
- Our HF has the recommended system requirements.

Yesterday we completed this configuration and started to collect logs. The issue we are facing is that logs arrive with a delay, which is always around 30-60 minutes. So, we have to understand why. Our suspicion is that there is not a single root cause, but a set of them:

- Use of WMI instead of a forwarder, which could be problematic with multiple hosts, as stated in this Splunk Community thread.
- Collection of all logs without filtering anything; I mean that for the above categories we collect all related EventCode occurrences.
- A "burst" in sending logs, because we started collection from all DCs at the same time.
- Only one Remote event log collection input configured, though we think this has a minimal weight on the performance issues.

Based on this, if all our assumptions are correct, and considering that the customer will certainly not allow UF installation, we thought to:

- Exclude unwanted logs with a filter on the HF (as sketched below).
- Evaluate whether to create separate Remote event log collection inputs, in the worst case one for every DC.

Do you think this is fine? Our main doubt currently is: have we identified all the causes of the issue? Are our fixes the right ones?
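On the filtering point, a hedged sketch of the usual index-time null-queue pattern on the HF; it assumes the WMI-collected events carry the classic key=value body (so EventCode=... appears in _raw) and arrive with sourcetypes like WMI:WinEventLog:Security, and the EventCode list is purely illustrative:

props.conf on the HF:

[WMI:WinEventLog:Security]
TRANSFORMS-drop_noise = drop_noisy_eventcodes

transforms.conf on the HF:

[drop_noisy_eventcodes]
REGEX = EventCode=(4662|5156|5158)
DEST_KEY = queue
FORMAT = nullQueue

Anything matching the regex is routed to the null queue before indexing; everything else continues to Splunk Cloud unchanged.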