All Topics


A single-column join is working:

    index=* source=jar columns.path="*/log4j-core*" NOT columns.path=*/log4j*2.17* host IN (*.test.com)
    | rename columns.pid AS pid, columns.pid_ts AS pid_ts, columns.path AS path
    | dedup host path pid
    | join pid type=left max=1
        [search index=* source=process host IN (*.test.com) earliest=-25h latest=now
        | rename columns.pid AS pid, columns.cmdline AS cmd, columns.username AS user, columns.uid AS uid, columns.groupname AS group, columns.gid AS gid
        | dedup host pid]
    | table host, path, pid, user, uid, group, gid, cmd

but the multi-column join is not working:

    index=* source=jar columns.path="*/log4j-core*" NOT columns.path=*/log4j*2.17* host IN (*.test.com)
    | rename columns.pid AS pid, columns.pid_ts AS pid_ts, columns.path AS path
    | dedup host path pid
    | join host, pid type=left max=1
        [search index=* source=process host IN (*.test.com) earliest=-25h latest=now
        | rename columns.pid AS pid, columns.cmdline AS cmd, columns.username AS user, columns.uid AS uid, columns.groupname AS group, columns.gid AS gid
        | dedup host pid]
    | table host, path, pid, user, uid, group, gid, cmd

Splunk Enterprise version: 8.2.6, build: a6fe1ee8894b
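A common workaround for multi-field joins is to emulate the join with append plus stats, which avoids join's subsearch limits entirely. A minimal sketch, assuming the same field names as above:

    index=* source=jar columns.path="*/log4j-core*" NOT columns.path=*/log4j*2.17* host IN (*.test.com)
    | rename columns.pid AS pid, columns.path AS path
    | dedup host path pid
    | append
        [search index=* source=process host IN (*.test.com) earliest=-25h latest=now
        | rename columns.pid AS pid, columns.cmdline AS cmd, columns.username AS user, columns.uid AS uid, columns.groupname AS group, columns.gid AS gid
        | dedup host pid]
    | stats values(path) AS path values(cmd) AS cmd values(user) AS user values(uid) AS uid values(group) AS group values(gid) AS gid by host, pid
    | where isnotnull(path)
    | table host, path, pid, user, uid, group, gid, cmd

The final where clause keeps only host/pid pairs that appeared in the jar search, which approximates the left join.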
Unable to find sourcetype="ms365:defender:incident:alerts". Can you please help?
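A quick way to check whether any data has ever landed under that sourcetype, and in which index, is a tstats lookup (a sketch; widen or narrow the index list for your environment):

    | tstats count where index=* sourcetype="ms365:defender:incident:alerts" by index, sourcetype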
Hey guys, I have the following event data (Picture 1) that comes into Splunk via a universal forwarder. I managed to generate a table out of it (by using different operations, such as splitting on the \n delimiter and the statements in Picture 2). My question is: can I define permanent parsing rules for specific directories, so that the statements in Picture 2 and the \n delimiter are permanently applied to those directories? :)

Sincerely, Leon
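Permanent rules of this kind usually live in props.conf, keyed either by sourcetype or by a source path. A minimal sketch, assuming the files sit under /var/log/myapp and that newline is the event delimiter (the path and sourcetype name are placeholders):

    # props.conf on the indexer or heavy forwarder
    [source::/var/log/myapp/*]
    sourcetype = myapp:events
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

Search-time field extractions for those events could then go in a [myapp:events] stanza using EXTRACT- settings.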
Hi, I need your help. I have a lookup table, vcs_ip.csv; inside the table there is a column named ip. This table holds all the allowed traffic. How do I construct a query to search for Dst_ip and Src_ip values NOT found in the ip column of vcs_ip.csv?
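One common pattern is to run the lookup against both fields and keep only events where neither matched. A sketch, assuming the events carry Src_ip and Dst_ip fields and that index=your_index is a placeholder to replace:

    index=your_index
    | lookup vcs_ip.csv ip AS Src_ip OUTPUT ip AS src_allowed
    | lookup vcs_ip.csv ip AS Dst_ip OUTPUT ip AS dst_allowed
    | where isnull(src_allowed) AND isnull(dst_allowed)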
Hello. The dates I have are in the form of a week-starting value, for example WeekStarting = 04/04/2022, 11/04/2022, and so on. I am unable to group the data, and the business now requires 3-month rolling average figures for the last 2 years. How can I achieve this?

My search:

    index=AB source=AB
    | search (WeekStarting="2021*" OR WeekStarting="2022*")
    | chart avg(DeviceCount) by WeekStarting

It should be visualized as a 3-month analysis. I am also trying timewrap with span=1month by device count, but no statistics appear! Please help.
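One approach is to parse WeekStarting into _time so that timechart can do the monthly bucketing, then apply a simple 3-point moving average. A sketch, assuming the dates are day/month/year:

    index=AB source=AB
    | eval _time = strptime(WeekStarting, "%d/%m/%Y")
    | where _time >= relative_time(now(), "-2y")
    | timechart span=1mon avg(DeviceCount) AS avg_devices
    | trendline sma3(avg_devices) AS rolling_3mo_avg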
I don't have Enterprise Security, FYI, just Enterprise Search. I appreciate your assistance in this matter. Thanks
In the email alert configuration, I want to make certain text bold and add hyperlinks to the message text instead of placing raw links. I am choosing the HTML & Plain Text option, but it won't process the tags; it displays them as-is in the email. Thanks in advance.
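For reference, the underlying settings live in savedsearches.conf; a minimal sketch, where the alert name, message text, and URL are placeholders:

    # savedsearches.conf
    [My Alert]
    action.email = 1
    action.email.content_type = html
    action.email.message.alert = A <b>critical</b> condition fired. See <a href="https://splunk.example.com/app/search/triage">the triage dashboard</a>.

With content_type set to html, the tags should be rendered rather than shown literally.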
Hi AppDynamics, my problem is that I cannot push traces through OpenTelemetry into AppDynamics. I tried two different collectors, but the error message was the same. My license is Lite.

Errors for the specific collectors:

Docker image collector: otel/opentelemetry-collector-contrib-dev:latest

    otel-collector | 2022-10-09T22:08:20.436Z info exporterhelper/queued_retry.go:215 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "error exporting items, request to https://fra-sls-agent-api.saas.appdynamics.com/v1/traces responded with HTTP Status Code 403", "interval": "44.029649035s"}

Docker image collector: appdynamics/appd-oc-otel-collector

    otel-collector | 2022-10-09T22:24:01.307Z info exporterhelper/queued_retry.go:231 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "error exporting items, request to https://fra-sls-agent-api.saas.appdynamics.com/v1/traces responded with HTTP Status Code 403", "interval": "4.688570936s"}

Collector settings: (screenshot not included.) My server lives on DigitalOcean, Amsterdam region. Do I need to move it to Frankfurt? Thanks
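An HTTP 403 from the ingest endpoint usually points at missing or invalid credentials rather than geography. For comparison, the otlphttp exporter accepts custom headers; a sketch of a collector config, where the header name and key value are assumptions, not something confirmed by AppDynamics docs:

    # otel-collector config (YAML sketch)
    exporters:
      otlphttp:
        endpoint: https://fra-sls-agent-api.saas.appdynamics.com
        headers:
          x-api-key: ${APPD_API_KEY}   # placeholder; use the key issued for your tenant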
How would I modify this search:

    | tstats prestats=true summariesonly=true allow_old_summaries=true count
        from datamodel=Authentication.Authentication
        where Authentication.app=win* Authentication.action=$action$
        by _time, Authentication.user span=10m
    | timechart minspan=10m useother=true count by Authentication.user limit=50

to exclude this IP address: src_ip!="10.0.1.90"?
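In the CIM Authentication data model the source address is normally exposed as Authentication.src rather than src_ip, so a likely modification (a sketch, assuming 10.0.1.90 appears in that field) is:

    | tstats prestats=true summariesonly=true allow_old_summaries=true count
        from datamodel=Authentication.Authentication
        where Authentication.app=win* Authentication.action=$action$ Authentication.src!="10.0.1.90"
        by _time, Authentication.user span=10m
    | timechart minspan=10m useother=true count by Authentication.user limit=50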
How can I have a universal forwarder read from a file and simultaneously send the data to the indexer in Splunk format and to another device in syslog format?
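Dual routing is typically done in outputs.conf with one tcpout group and one syslog group. A minimal sketch with placeholder hosts and ports; note that a UF sends syslog output unparsed, and per-event routing rules would need a heavy forwarder:

    # outputs.conf
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = 10.0.0.10:9997

    [syslog]
    defaultGroup = syslog_out

    [syslog:syslog_out]
    server = 10.0.0.20:514
    type = udp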
I am forwarding logs from the directories below. This is the inputs.conf file:

    [monitor:///u01/app/oracle/scripts/SplunkMonitoring/Log]
    disabled = false
    index = osb
    crcSalt = <SOURCE>

    [monitor:///u01/app/oracle/scripts/Logging/output]
    disabled = false
    index = osb

Output from /u01/app/oracle/scripts/Logging/output is forwarding successfully, but no logs were received for /u01/app/oracle/scripts/SplunkMonitoring/Log. Below is the splunkd.log file:

    10-09-2022 21:31:41.925 +0800 INFO WatchedFile [214996 tailreader0] - Will begin reading at offset=0 for file='/u01/app/oracle/scripts/SplunkMonitoring/Log/ServerStatus.txt'.
    10-09-2022 21:32:01.208 +0800 INFO AutoLoadBalancedConnectionStrategy [214989 TcpOutEloop] - Connected to idx=10.9.0.49:9997:0, pset=0, reuse=0. autoBatch=1
    10-09-2022 21:32:05.125 +0800 INFO TailReader [214996 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
    10-09-2022 21:32:31.053 +0800 INFO AutoLoadBalancedConnectionStrategy [214989 TcpOutEloop] - Connected to idx=10.9.0.49:9997:2, pset=0, reuse=0. autoBatch=1
    10-09-2022 21:32:35.054 +0800 INFO TailReader [214996 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
    10-09-2022 21:33:00.976 +0800 INFO AutoLoadBalancedConnectionStrategy [214989 TcpOutEloop] - Connected to idx=10.9.0.49:9997:1, pset=0, reuse=0. autoBatch=1
    10-09-2022 21:33:07.976 +0800 INFO TailReader [214996 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
    10-09-2022 21:33:21.924 +0800 INFO WatchedFile [214996 tailreader0] - Will begin reading at offset=0 for file='/u01/app/oracle/scripts/SplunkMonitoring/Log/jms_status.txt'.
    10-09-2022 21:33:21.931 +0800 INFO WatchedFile [214996 tailreader0] - Will begin reading at offset=0 for file='/u01/app/oracle/scripts/SplunkMonitoring/Log/DataSourceStatus.txt'.
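Two quick checks that often help here: confirming what the forwarder thinks it is monitoring, and looking for later tail-processor messages about those files (standard forwarder CLI; the grep pattern is just illustrative):

    $SPLUNK_HOME/bin/splunk list monitor
    $SPLUNK_HOME/bin/splunk btool inputs list --debug
    grep -i 'SplunkMonitoring' $SPLUNK_HOME/var/log/splunk/splunkd.log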
Hi support team, I have just registered an account and received an email for resetting the password, but I have not received the email with the Controller information. Please help me check my account: [email redacted]. Thanks & Regards.

^ Post edited by @Ryan.Paredez to remove the email address. For privacy reasons, please do not share your email address in community posts. If you need to share it, please do so via Community Private Message.
In a lot of official Splunk documentation we are told, when "wiping" an instance, to run the command:

    splunk clean all

OK. But doing so also resets the passwd file, so from then on we have no access to the Splunk instance, unless we previously made a backup and restore it after "clean all" has done its job. Considering this, that documentation seems very dangerous to me when it does not spell this case out. An example: I need to completely remove an instance from a SH cluster.

1) I follow the "clean all" documentation and end up with an unusable instance.
2) Instead, I run "splunk clean kvstore --cluster" or "splunk clean kvstore --all" myself, and the instance is still running and operative, just without the cluster's KV store data.

So, shouldn't "splunk clean all" itself make a backup of the auth data needed to log into the instance, or at least reset the password to the original "changeme"?
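A defensive pattern until that changes: back up the auth file before cleaning and restore it afterwards. A sketch, with paths assuming a default $SPLUNK_HOME install:

    cp $SPLUNK_HOME/etc/passwd /tmp/passwd.bak
    $SPLUNK_HOME/bin/splunk stop
    $SPLUNK_HOME/bin/splunk clean all -f
    cp /tmp/passwd.bak $SPLUNK_HOME/etc/passwd
    $SPLUNK_HOME/bin/splunk start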
Hello, we have a huge setup whose UFs are managed through a deployment server. The UFs are in far-flung places, installed and managed by their respective asset owners. Lately some random clients have stopped phoning home to the deployment server. Although they show as disconnected, they are still sending logs, which suggests they have not been stopped and no configuration changes have happened. Kindly help me understand what the issue is and how it can be handled. Thank you.
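When this comes up, the deployment-client chatter in _internal is usually the first place to look. A sketch of a search to run on the deployment server, with the host filter as a placeholder:

    index=_internal sourcetype=splunkd host=problem_uf* (component=DC:DeploymentClient OR component=HttpPubSubConnection)
    | stats latest(_time) AS last_seen latest(_raw) AS last_event by host
    | convert ctime(last_seen)

If the clients log nothing under those components but their data keeps arriving, the management channel (port 8089 outbound) is a likely suspect.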
Hello Splunk experts, we had an issue where several network devices were not being ingested into Splunk. Further checking with Splunk found that the logs from the router are making it to the rsyslog server, but rsyslog is NOT writing them to disk. We need to check the rsyslog config to see why the data is not being written. Any suggestion on how to check this? Is there an example rsyslog.conf we can refer to? Thanks in advance.
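For comparison, here is a minimal rsyslog drop-in that listens on UDP 514 and writes one file per sending host; the port, path, and subnet test are assumptions to adapt:

    # /etc/rsyslog.d/60-network.conf
    module(load="imudp")
    input(type="imudp" port="514")

    template(name="PerHostFile" type="string"
             string="/var/log/network/%HOSTNAME%/syslog.log")

    if ($fromhost-ip startswith "10.") then {
        action(type="omfile" dynaFile="PerHostFile" createDirs="on")
        stop
    }

Running rsyslogd -N1 validates the config syntax without restarting the daemon, which is a quick first check.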
I'm working on a Splunk custom search command (CSC) and I've found it really helpful to output logs to the search log with:

    print('Whatever I want', file=sys.stderr)

which appears in the search log as:

    10-07-2022 23:47:19.915 ERROR ChunkedExternProcessor [10374 ChunkedExternProcessorStderrLogger] - stderr: Whatever I want

But that's a very beefy (even misleading) preamble. So my question is: can I control the output that gets displayed in the search log? I'm assuming there's a file handle somewhere I can write to, and I would love to get hold of it! It's obviously not sys.stdout, because that's where the actual event data gets transmitted. Thanks!
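If the command is built on the splunklib SDK, one alternative worth noting is the logger that SearchCommand exposes: it goes through standard Python logging (configurable via the app's logging.conf) rather than the stderr pipe, so you control the format. A minimal sketch, with the command class hypothetical:

    import logging
    from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

    @Configuration()
    class MyCommand(StreamingCommand):
        def stream(self, records):
            # self.logger is a standard logging.Logger owned by the SDK;
            # its output lands in the command's own log, not search.log
            self.logger.info("Whatever I want")
            for record in records:
                yield record

    dispatch(MyCommand, module_name=__name__)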
Did Splunk Inc. just get rid of MaxMind's free iplocation database and replace it with a different free product (dbip-city-lite.mmdb)? Am I the only one who thinks the accuracy of the IP lookups has gotten much worse with dbip-city-lite? If this new database is to be believed, all my IPv6 connections are coming from Texas, New York, or Illinois, and I work in Chicago (I don't). The free MaxMind version that came with Splunk was not 100% accurate, but it seemed much more accurate throughout the years I used it. Anyone agree or disagree?
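For what it's worth, the database that the iplocation command uses can be swapped by pointing limits.conf at your own MaxMind file. A sketch, with the path a placeholder:

    # limits.conf
    [iplocation]
    db_path = /opt/splunk/share/GeoLite2-City.mmdb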