All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is there a way to create a sort of catch-all base search/alert and then have configurable parameters dependent on asset/app/host/criticality? If possible I'd rather not create tens of alerts dependent on the various factors listed above, but I'm not sure if there's a way to do it otherwise. E.g. for 'too many login fails per X minutes', have a configurable value for 'too many' depending on the asset/host/app details (i.e. we would like one use case to cover "too many failed logins" (and other common cases) across lots of apps without copying the case over and over).
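A common pattern (a sketch only; the lookup name, index, and field names below are placeholders, not taken from your environment) is to keep the per-asset threshold in a lookup and drive a single alert from it:

```
index=auth action=failure earliest=-10m
| stats count AS failures BY host
| lookup login_thresholds host OUTPUT max_failures
| fillnull value=5 max_failures
| where failures > max_failures
```

One saved alert then covers every host: tuning a threshold is just an edit to a lookup row, and hosts without a row fall back to the default set by fillnull.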
Hey guys, how do I catch/handle an alert's results in another monitoring alert/rule? There's probably a way with Alert Manager, but I hope there's a better/more general method. Splunk Enterprise 8, Linux Red Hat/CentOS. Thanks in advance.
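One general approach (a sketch, not the only way; the index and sourcetype names are made up for illustration) is to have the first alert write its results to a summary index with collect, and have the monitoring rule search that index on its own schedule:

```
... first alert's search ...
| collect index=alert_results sourcetype=first_alert
```

The monitoring rule then runs something like `index=alert_results sourcetype=first_alert | stats count BY ...`. Separately, scheduled searches already log trigger metadata (though not the result rows) to `index=_internal sourcetype=scheduler`, which can be enough for simple "did alert X fire" rules.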
Hi Experts, I am looking for a method to ingest only one column from a CSV; the rest of the columns should be ignored. The specific column we want can be present in any order in the CSV. We are using the Universal Forwarder. Please help. Thank you.
I'm new to Splunk and just wanted to understand how we can create an automatic lookup. If I can get an example, that would be great.
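For example (a minimal sketch; the sourcetype, lookup file, and field names are placeholders), an automatic lookup is defined in two parts, a lookup table stanza in transforms.conf and a LOOKUP- setting in props.conf:

```
# transforms.conf
[my_user_lookup]
filename = my_user_lookup.csv

# props.conf
[my_sourcetype]
LOOKUP-user_info = my_user_lookup user_id OUTPUT full_name department
```

With this in place, any search over my_sourcetype automatically gains full_name and department whenever user_id matches a row in my_user_lookup.csv, without the search having to call `lookup` explicitly.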
Hi all, I have Splunk Kafka Connect, which I installed from GitHub. I started Kafka Connect after I changed config/connect-distributed.properties and edited bootstrap.servers to the correct ones (a cluster of 3 Kafka servers). Then I added a topic to monitor with a curl command, and Splunk Kafka Connect started throwing an exception:

[2020-11-17 13:08:34,095] WARN [Producer clientId=producer-3] Got error produce response with correlation id 410 on topic-partition _kafka-connect-splunk-task-config-0, retrying (214748341 attempts left). Error: NOT_ENOUGH_REPLICAS (org.apache.kafka.clients.producer.internals.Sender:525)

The relevant part of config/connect-distributed.properties is:

group.id=kafka-connect-splunk-hec-sink
config.storage.topic=_kafka-connect-splunk-task-configs
config.storage.replication.factor=3
offset.storage.topic=_kfaka-connect-splunk-offsets
offset.storage.replication.factor=3
offset.storage.partitions=25
status.storage.topic=_kafka-connect-splunk-statuses
status.storage.replication.factor=3
status.storage.partitions=5

And the curl looks like:

curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d '{
  "name": "kafka-connect-splunk",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "3",
    "topics": "Topic_name",
    "splunk.indexes": "",
    "splunk.hec.uri": "https://splunk:8088",
    "splunk.hec.token": "Token",
    "splunk.hec.raw": "true",
    "splunk.hec.raw.line.breaker": "",
    "splunk.hec.ack.enabled": "true",
    "splunk.hec.ack.poll.interval": "10",
    "splunk.hec.ack.poll.threads": "2",
    "splunk.hec.ssl.validate.certs": "false",
    "splunk.hec.http.keepalive": "true",
    "splunk.hec.max.http.connection.per.channel": "4",
    "splunk.hec.total.channels": "8",
    "splunk.hec.max.batch.size": "1000000",
    "splunk.hec.threads": "2",
    "splunk.hec.event.timeout": "300",
    "splunk.hec.socket.timeout": "120",
    "splunk.hec.track.data": "true"
  }
}'

I would like any assistance on what to check, and what is the best practice for Splunk Kafka Connect configuration? Thank you in advance, John
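NOT_ENOUGH_REPLICAS from the producer usually means the broker could not satisfy min.insync.replicas for the internal Connect topics, so one thing worth checking (a suggestion, not a confirmed diagnosis) is how those topics were actually created:

```
bin/kafka-topics.sh --bootstrap-server <broker>:9092 --describe \
  --topic _kafka-connect-splunk-task-config
```

Comparing the reported ReplicationFactor and Isr against the broker's min.insync.replicas shows whether the topic was auto-created earlier with too small a replication factor. Note also that the error names `_kafka-connect-splunk-task-config` while the properties file sets `config.storage.topic=_kafka-connect-splunk-task-configs` (plural), and the offsets topic is spelled `_kfaka-...`; those mismatches may be worth double-checking.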
Hi, I use the search below in order to display a pie chart and to change the label of each pie slice:

`CPU`
| fields process_cpu_used_percent host process_name
| eval process_name=case(process_name like "mfev%" OR process_name like "mcdatrep" OR process_name=="mcshield" OR process_name=="amupdate" OR process_name=="McScript_InUse" OR process_name=="macompatsvc" OR process_name=="FrameworkService" OR process_name=="McScanCheck", "McAFEE", process_name like "Wmi%", "WMI")
| stats count by process_name

By clicking on a pie slice, I open a drilldown in order to display the events related to the pie slice, so I have added the advanced parameters:

process_name = $click.value$
host = $tok_filterhost$

What is strange is that when I click on the "WMI" pie slice, I can display events in the drilldown, but when I click on the "McAFEE" pie slice, I am not able to display events. What is wrong, please?
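A likely explanation (an assumption based on the search shown, not a confirmed diagnosis): "McAFEE" is a synthetic value produced by the eval/case, so it never appears as process_name in the raw events, and a drilldown that filters raw events with process_name=$click.value$ therefore finds nothing. A sketch of a drilldown search that re-applies the grouping before filtering (the case() branches are abbreviated here; the real drilldown needs the full expression from the base search):

```
`CPU` host=$tok_filterhost$
| eval process_group=case(process_name like "mfev%" OR process_name=="mcshield", "McAFEE",
                          process_name like "Wmi%", "WMI")
| search process_group="$click.value$"
```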
Hello, we are very new to Splunk. At the moment we are trying the Palo Alto app and we miss some inputs on the dashboards for search and filter. What's the best way to extend this app so that the changes are not lost after an upgrade of the app? Is it OK to copy the view from the default to the local folder and make the changes there? Will that persist after an upgrade of the app?
Hi All, I have a requirement to check which user is running a search. I need help with an SPL query to get the user and search details.
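One way (a sketch; it requires access to the _audit index) is to query the audit trail, which records every search a user runs:

```
index=_audit action=search info=granted search=*
| table _time user search search_id
| sort -_time
```

Currently running jobs can also be listed with `| rest /services/search/jobs | table author title dispatchState`, assuming your role is allowed to call that REST endpoint.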
Hi All, I have one server on which the DMC is configured. My manager wants to decommission this server and migrate the DMC to another server. Is it possible to migrate the DMC to another server? If it is possible, do I need to reconfigure anything, or how will it work?
Hi All, I'm new to Splunk and I'm confused about the difference between stats, eventstats, and streamstats. Can anyone help me understand?
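A quick way to see the difference side by side (the bytes/host fields are just examples):

```
... | stats avg(bytes) AS avg_bytes BY host
... | eventstats avg(bytes) AS avg_bytes BY host
... | streamstats avg(bytes) AS running_avg
```

The first collapses the events into one summary row per host. The second keeps every event and adds the per-host average as a new field on each one, so you can compare each event against its group's aggregate. The third also keeps every event but computes a running aggregate in event order, which is what you want for cumulative counts or moving averages.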
I have a string in the pattern: [substring1][substring2][substring3] Spark App State changed to FAILED. Total time taken is 10 minutes. I want to extract it into 4 fields:
Field1 = substring1
Field2 = substring2
Field3 = substring3
Field4 = 10
Please help.
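One way to do this with rex (a sketch; it assumes the bracketed prefix and the "Total time taken" phrasing are consistent across events):

```
... | rex field=_raw "^\[(?<Field1>[^\]]+)\]\[(?<Field2>[^\]]+)\]\[(?<Field3>[^\]]+)\].*Total time taken is (?<Field4>\d+)"
```

Applied to the sample string, this yields Field1=substring1, Field2=substring2, Field3=substring3, and Field4=10.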
Hello, I'm wondering if it is possible to use the rex command with a datamodel without declaring attributes for every rex field I want (I have lots of them). Thanks.
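You can usually run rex on the events a datamodel returns without touching the model definition; a sketch (Network_Traffic/All_Traffic and the regex are just examples, not your model):

```
| datamodel Network_Traffic All_Traffic search
| rex field=_raw "user=(?<dm_user>\S+)"
```

The extracted fields exist only in that search, and the datamodel's declared fields are untouched. The trade-off is that tstats-style acceleration cannot use rex-derived fields, since they are computed at search time on the raw events.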
We're trying to do: collect the event log by REST input on Splunk Enterprise 8.1 --> HF (v8.1 on Windows) --> external syslog destination. The logs forwarded from Splunk are available on the syslog server and contain the logs we need, but they also contain many audit logs from Splunk itself. No matter how much we modify outputs.conf we cannot change this. What do we need to configure in order to filter out Splunk's own audit logs? Thanks. Here is the config (C:\Program Files\Splunk\etc\apps\SplunkForwarder\default\):

outputs.conf:
[syslog]
defaultGroup = vco_event_group
priority = NO_PRI
syslogSourceType = sourcetype::vco_event_log

[syslog:vco_event_group]
server = 172.16.36.251:5140

props.conf:
[vco_event_log]
TRANSFORMS-vco_event_log = vco_to_syslog

transforms.conf:
[vco_to_syslog]
DEST_KEY = MetaData:Sourcetype
REGEX = vco_event_log
FORMAT = vco_event_group

(Screenshots: the audit log on the syslog server, and the event log we need.)
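One approach worth trying (a sketch based on common routing setups, keeping your existing names): instead of making vco_event_group the defaultGroup, which sends everything including Splunk's own audit events, route only the wanted sourcetype with the _SYSLOG_ROUTING key:

```
# outputs.conf  (drop the defaultGroup line from [syslog])
[syslog:vco_event_group]
server = 172.16.36.251:5140

# props.conf
[vco_event_log]
TRANSFORMS-route_vco = vco_to_syslog

# transforms.conf
[vco_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = vco_event_group
```

With this, only events of sourcetype vco_event_log receive the syslog routing key, so _audit traffic never reaches the syslog output. Note that the transform shown in your config sets DEST_KEY = MetaData:Sourcetype, which rewrites the sourcetype rather than routing anything.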
I have two different lookups and I need to compare them and find the missing employee names. If a name is present in a lookup it should display Yes, and if not present it should display No. Here is an example.

Lookup 1: Sam, Sheila, James
Lookup 2: Sam, Sheila, James, Tom

Then create two columns and display Yes or No depending on whether the name is present:

Employee Names | Lookup 1 | Lookup 2
Sam            | Yes      | Yes
Sheila         | Yes      | Yes
James          | Yes      | Yes
Tom            | No       | Yes
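A sketch of one way to build that table with inputlookup and append (it assumes both lookup files have a column called employee_name; adjust the file and field names to yours):

```
| inputlookup lookup1.csv
| eval in1="Yes"
| append [| inputlookup lookup2.csv | eval in2="Yes"]
| stats values(in1) AS in1 values(in2) AS in2 BY employee_name
| fillnull value="No" in1 in2
| rename employee_name AS "Employee Names", in1 AS "Lookup 1", in2 AS "Lookup 2"
```

The stats by employee_name merges the two appended result sets, and fillnull turns missing memberships into "No".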
We've been seeing this behavior since a recent reinstall of 4.5.10 where, after a day or two, the Machine Agent stops reporting metrics even though availability looks OK. Investigation found errors in the logs complaining about having too many open files when trying to run extensions/scripts. Looking at open file descriptors for the process, it appears that it is leaking file handles for its own logs. I'm curious if anyone else has seen similar behavior recently. From a test machine:

root@mm03:/proc/23667/fd# ps -ef | grep appd
root 11569 806 0 22:30 pts/0 00:00:00 grep --color=auto appd
root 23667 1 7 Nov18 ? 03:46:54 /opt/appdynamics/machine-agent//jre/bin/java -Dlog4j.configuration=file:/opt/appdynamics/machine-agent/conf/logging/log4j.xml -jar /opt/appdynamics/machine-agent/machineagent.jar
root@mm03:/proc/23667/fd# ls -l /proc/23667/fd | wc -l
4097
root@mm03:/proc/23667/fd# ls -l /proc/23667/fd | grep "machine-agent.log" | wc -l
3922
Hi Everyone, basically, we have an indexer cluster to which multiple search head clusters are connected. I do not know the exact term, but I would like to see the performance/usage of each SHC. The only place I am able to see all the search heads is the cluster master they are connected to, where I have access to see the details. I do not have any details in my DMC related to the other SHCs. Thanks, Purush
There are two sourcetypes, sourcetype=A and sourcetype=B, and we have extracted a field "login" in both sourcetypes.
1. We need a "count" of the login values which are available in sourcetype=A but not in sourcetype=B.
2. We need a "list of values" of the login values which are available in sourcetype=A but not in sourcetype=B.
3. Is there any graph that can show how many "login" values are missing when comparing the sourcetypes, using timechart? Any suggestions?
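A sketch covering the first two asks (the index name is assumed; adjust to your data):

```
index=your_index (sourcetype=A OR sourcetype=B)
| stats dc(sourcetype) AS st_count values(sourcetype) AS st BY login
| where st_count=1 AND st="A"
```

The resulting table is the list of logins present only in A (ask 2); append `| stats count` for the total (ask 1). For ask 3, a timechart could count such logins per time bucket, though "missing in B" is inherently a comparison over the whole search window rather than something per bucket, so the per-bucket picture needs careful interpretation.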
I have a couple of heavy forwarders that we've been using for a while without a deployment server. Now we want to use a DS to manage their apps and make sure they are consistent, but it seems the original installation was a clone or copy of the splunk folder, so both instances have the same GUID (instance ID). The deployment server notices this:

WARN ClientSessionsManager - Client with Id 'F8857965-300D-4E42-AECA-D35597DC4441' has changed some of its properties on the latest phone home. Old properties are: {ip=38.X.X.X, dns=FQDN, hostname=XXXCHSLKHF01, deploymentClientName="XXXCHSLKHF01", connectionId=connection_38.x.x.x.x_8089_38X.X.X_XXXCHSLKHF01_XXXCHSLKHF01, utsname="linux-x86_64", build=7af3758d0d5e, mgmt=8089, splunkVersion=7.3.3, package=enterprise, instanceId=F8857965-300D-4E42-AECA-D35597DC4441, instanceName=XXXCHSLKHF01}. New properties are: {ip=38.X.X.X, dns=38.130.118.2, hostname=XXXMNSLKHF01, deploymentClientName="F8857965-300D-4E42-AECA-D35597DC4441", connectionId=connection_38.X.X.X_8089_38.X.X.X_XXXMNSLKHF01_F8857965-300D-4E42-AECA-D35597DC4441, utsname="linux-x86_64", build=7af3758d0d5e, mgmt=8089, splunkVersion=7.3.3, package=enterprise, instanceId=F8857965-300D-4E42-AECA-D35597DC4441, instanceName=XXXMNSHF}.

So the DS replaces one HF with the other every time one phones home. How can I change this instance ID?
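The GUID lives in $SPLUNK_HOME/etc/instance.cfg, and Splunk generates a fresh one when that file is absent at startup, so a commonly used fix is to run the following on just one of the two cloned instances:

```
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/instance.cfg $SPLUNK_HOME/etc/instance.cfg.bak
$SPLUNK_HOME/bin/splunk start
```

There is also `splunk clone-prep-clear-config`, intended for preparing images before cloning, which clears the GUID as well, but it resets more than just the instance ID, so the targeted file removal above is usually the safer option on a live forwarder.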
Is it possible to not ingest logs related to a specific RemoteHostName? I have tried:

[WinNetMon://winnetmon]
...
blacklist = RemoteHostName = "server1.company.org", "server2.company.org"
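As far as I know, that blacklist syntax isn't supported for WinNetMon. An alternative that works for any input is dropping the events at parse time with a nullQueue transform on the indexer or heavy forwarder (the props stanza below assumes the events arrive with a sourcetype named WinNetMon and that the hostname appears in the raw event text; verify both against your data):

```
# props.conf
[WinNetMon]
TRANSFORMS-drop_hosts = drop_winnetmon_hosts

# transforms.conf
[drop_winnetmon_hosts]
REGEX = server1\.company\.org|server2\.company\.org
DEST_KEY = queue
FORMAT = nullQueue
```

WinNetMon also has its own filter settings in inputs.conf (e.g. a remoteAddress regex), but those match IP addresses rather than hostnames, so you would need to resolve the hosts to IPs first.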
Hoping someone can help; I'm reasonably new to Splunk. I have a number of Splunk events that come from uploaded small text files. Is there a way I can search inside these uploaded files explicitly? In my case they are transaction files that may be for sales or refunds. So, for example, I'd like to search through all transaction files for today to get metrics for all sales and refunds. Thanks in advance.
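Assuming each uploaded file becomes ordinary events under its own source, a sketch (the index name, source pattern, and the sale/refund keywords are guesses about your data, not known values):

```
index=transactions source="*transaction*" earliest=@d
| eval txn_type=case(searchmatch("refund"), "refund", searchmatch("sale"), "sale")
| stats count BY txn_type
```

If the transaction type is already extracted as a field, `| stats count BY that_field` is simpler; the searchmatch() approach is just a fallback when you only have keywords in the raw text.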