All Topics


Hello everyone! I'm using the Splunk OpenTelemetry Collector to send logs from Kubernetes to Splunk through an HEC input. It runs as a DaemonSet and is deployed via the Helm chart: https://github.com/signalfx/splunk-otel-collector-chart

I would like to exclude logs containing a specific string, for example "Connection reset by peer", but I cannot find a configuration option that can do that. It looks like processors can do this: https://opentelemetry.io/docs/collector/configuration/#processors

There is also a default OpenTelemetry configuration in the chart, but I cannot understand how to add a filter to it: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/config/_otel-collector.tpl#L35

Has anyone encountered this issue, or do you have any advice for this case?

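One possible way to do this (a rough sketch, not tested against your setup): the OpenTelemetry filter processor can drop log records whose body matches a regex, and the chart can merge extra collector configuration through the agent.config section of values.yaml. The processor name filter/drop-noise and the pipeline override below are assumptions to verify against your rendered config and the filterprocessor README for your collector version.

# Hypothetical values.yaml fragment for the splunk-otel-collector chart
agent:
  config:
    processors:
      filter/drop-noise:
        logs:
          exclude:
            match_type: regexp
            bodies:
              - '.*Connection reset by peer.*'
    service:
      pipelines:
        logs:
          # Helm replaces lists rather than merging them, so copy the chart's
          # default logs-pipeline processors here and append filter/drop-noise.
          processors: [memory_limiter, batch, filter/drop-noise]
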
In my search I have two rows: one column specifies the week and the other column is a multi-value field of EventIDs. I need to compare the multi-value field of one row to the multi-value field of the other row, and output the event values that differ between the two rows.

my search
| stats values(EventID) by week

Result:

week: This week
values(EventID): 4624 4625 4627 4634 4647 4648 4656 4658 4661 4663 4664 4670 4672 4673 4674 4688 4689 4690 4692 4693 4698

week: Previous Week
values(EventID): 4624 4625 4627 4634 4647 4648 4656 4658 4661 4663 4664 4670 4672 4673 4674 4688 4689 4690 4692 4693 4698 4702 4720 4722 4724 4725

Now I want to compare the "This week" event values with the "Previous Week" event values and table the values not seen in both weeks. Any suggestions are much appreciated, thanks!

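One possible approach (a sketch, with "my search" standing in for your base search): instead of comparing the two multi-value cells directly, group by EventID and keep the IDs that do not appear in both weeks.

my search
| stats dc(week) as week_count values(week) as seen_in by EventID
| where week_count < 2
| table EventID seen_in

Because the time range only covers the two weeks, any EventID with week_count < 2 exists in one week but not the other, and seen_in shows which one.
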
Hello All, I am looking for options to integrate social media platforms like Twitter or Facebook with Splunk, so that activity on the customer's official handles can be monitored. I wasn't able to find anything except some posts about Twitter. I tried to follow them and create a REST API input in Splunk using the developer.twitter.com portal. I created the client key and secret and the OAuth1 key and token, but I am not able to see any logs. Note that the SHs do not have access to the open internet. Can anyone suggest any options that can be used for free? Thanks in advance.

Good Afternoon, Some brief background: for the longest time we have been using Splunk as a standalone combined indexer and search head. In its infancy it was on a physical server and it worked great. Then virtualised environments came along and it was transferred into VMware. Since then it has been pretty bad; we have had all manner of "consultants" try to figure out why searching is so slow, but nobody really can figure it out.

The actual question: I have come full circle and have managed to source a brand new HP DL360 Gen10, but I wondered, should I be using this for the indexing or for the search head?

Hi All, I need a regex that can extract particular bits from proxy events consistently. There are different types of events with similar KVs, and I am looking for a unified rex that works for each individual event and extracts the following:

| rex mode=sed "s/(?<cip>c-ip=\S+)\s.*(?<csbytes>cs-bytes=\S+)\s.*(?<cscategories>cs-categories=\S+)\s.*(?<cshost>cs-host=\S+)\s.*(?<csip>cs-ip=\S+)\s.*(?<csmethod>cs-method=\S+)\s.*(?<csuriport>cs-uri-port=\S+)\s.*(?<csurischeme>cs-uri-scheme=\S+)\s.*(?<csusername>cs-username=\S+)\s.*(?<action>s-action=\S+)\s.*(?<sip>s-ip=\S+)\s.*(?<scbytes>sc-bytes=\S+)\s.*(?<status>sc-status=\S+)\s.*(?<timetaken>time-taken=\S+)\s.*(?<url>c-url=\S+)\s.*(?<csreferer>cs-Referer=\S+)\s.*(?<rip>r-ip=\S+)\s.*(?<ssourceport>s-source-port=\S+)\s.*/\1 \2 \3 \4 \5 \6 \7 \8 \9 \10 \11 \12 \13 \14 \15 \16 \17 \18/g"

Events with Action=Allowed work to some extent, but as soon as any of the fields is missing (for example cs-Referer), the event does not get stripped as expected and the regex is ignored. Any help is much appreciated. Thank you!

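One way around the missing-field problem (a sketch, assuming the events really are space-separated key=value pairs; base_search stands in for your proxy search): extract each field with its own rex, so an absent key such as cs-Referer only leaves that one field empty instead of making the whole expression fail.

base_search
| rex "c-ip=(?<cip>\S+)"
| rex "cs-bytes=(?<csbytes>\S+)"
| rex "cs-host=(?<cshost>\S+)"
| rex "cs-Referer=(?<csreferer>\S+)"
| rex "sc-status=(?<status>\S+)"
| rex "time-taken=(?<timetaken>\S+)"

Repeat for the remaining keys; for a key that is a suffix of another key (for example s-ip inside cs-ip), anchor it with something like (?<![\w-])s-ip= so it cannot match inside the longer name. Splunk's automatic key/value extraction (| extract pairdelim=" " kvdelim="=") is another option if the raw key names are acceptable as field names.
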
How can I get a list of correlation searches that were disabled or enabled in the last 7 days? As of now, I have a query to fetch the full list of all correlation searches with their disabled/enabled status, but I'm unable to fetch the correlation searches that were enabled or disabled in the last 7 days. Please help me out.

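One possible pattern (a sketch, assuming your correlation searches carry action.correlationsearch.enabled=1; the lookup name correlation_search_state.csv is a made-up example you would create for this purpose): snapshot the current enable/disable state on a schedule, then diff the live state against the snapshot.

Scheduled snapshot (run it on the cadence you want to compare against, e.g. every 7 days):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title disabled
| outputlookup correlation_search_state.csv

Ad-hoc comparison:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| rename disabled AS disabled_now
| lookup correlation_search_state.csv title OUTPUT disabled AS disabled_was
| where disabled_now != disabled_was
| table title disabled_was disabled_now

Depending on what your _audit index captures, you may also be able to find the individual enable/disable actions there directly, but the snapshot approach works regardless.
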
Hello, how can I split strings that sit on the same line, without delimiters, onto separate lines? I have lines that contain several strings run together on the same line; would it be possible to split them so that each string goes on a new line?

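Without a sample of the data this is only a guess, but if each embedded string follows a recognizable pattern, one common technique is to pull every match into a multi-value field and then expand it into one row per value. The pattern [A-Z]{3}\d{6} below is purely a placeholder to replace with whatever actually describes your strings.

base_search
| rex field=_raw max_match=0 "(?<item>[A-Z]{3}\d{6})" ``` placeholder pattern, adjust to your data ```
| mvexpand item
| table item
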
Hi, the application is logging the message below as an error, but development says it is a known bug, so to keep the BT health good it was suggested to ignore this error message:

com.xx.xx.xx.xxxx : Unable to sync user with IdentifyingName="admin": Username "admin" is reserved for built-in users

When I add this error to the ignore configuration, I get the error "Fields cannot contain HTML or special characters." How can I add the complete message to it?

I realise this is possibly sacrilegious, but here goes. We're looking to move all our Splunk stuff over to Google Chronicle. We used an app on Splunk Cloud that is no longer supported, so this needs to be done as quickly as possible. Eventually we will replace the Splunk forwarders with an agnostic solution, but until then I figure we might just be able to use the current heavy forwarder and direct it to the Chronicle instance. Is it as simple as this sounds, or am I really oversimplifying?

Hi Team, I want to forward my logs from a heavy forwarder to Splunk Cloud, and the same logs should also be forwarded to my test indexers. The configuration to forward the logs to the Splunk Cloud indexers already exists; how do I configure the same logs to also go to the test indexers? Where do we configure outputs.conf, and how do we configure it? Is this type of requirement possible? Please let us know.

Thanks & Regards, Umesh

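A minimal sketch of the usual way to clone data to two destinations from a heavy forwarder (the group name test_indexers and the host names are placeholders, and the Splunk Cloud group name must match whatever the cloud credentials app already defines in its outputs.conf):

# outputs.conf on the heavy forwarder, e.g. in a dedicated app under etc/apps
[tcpout]
# listing two groups sends a full copy of the data to each group
defaultGroup = splunkcloud, test_indexers

[tcpout:test_indexers]
server = test-idx1.example.com:9997, test-idx2.example.com:9997

After a restart the heavy forwarder should send the same events to both the Splunk Cloud indexers and the test indexers; keep in mind this doubles the outbound volume from that forwarder.
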
Hi all, I am using the Splunk search below to show the status when a process is not running, but it does not report anything when the process is not running.

sourcetype=ps host=test1 COMMAND=*event_demon*
| stats latest(cpu_load_percent) as "CPU %", latest(PercentMemory) as "MEM %", latest(RSZ_KB) as "Resident Memory (KB)", latest(VSZ_KB) as "Virtual Memory (KB)" by _time
| eval Process_Status = case(isnotnull('CPU %') AND isnotnull('MEM %'), "Running", isnull('CPU %') AND isnull('MEM %'), "Not Running", 1=1, "Unknown")
| table "CPU %", "MEM %", "Resident Memory (KB)", "Virtual Memory (KB)", Process_Status
| eval Process_Status = coalesce(Process_Status, "Unknown")
| rename "CPU %" as "CPU %", "MEM %" as "MEM %"
| fillnull value="N/A"

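A sketch of one possible fix, on the assumption that the issue is structural: stats only produces rows when matching ps events exist, so the "Not Running" branch can never fire. Basing the status on an event count (and dropping the by _time split so a single status row comes out) avoids that:

sourcetype=ps host=test1 COMMAND=*event_demon*
| stats count latest(cpu_load_percent) as "CPU %" latest(PercentMemory) as "MEM %" latest(RSZ_KB) as "Resident Memory (KB)" latest(VSZ_KB) as "Virtual Memory (KB)"
| appendpipe [ stats count | where count==0 | eval Process_Status="Not Running" ]
| eval Process_Status=coalesce(Process_Status, if(count>0, "Running", "Not Running"))
| fillnull value="N/A"
| table "CPU %" "MEM %" "Resident Memory (KB)" "Virtual Memory (KB)" Process_Status

The appendpipe clause adds a "Not Running" row when the base search returns nothing at all, which is exactly the case the original search never reports.
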
Hi All, We have installed Splunk ITSI 4.15.0 on a search head cluster. We are facing challenges in creating episodes and we are seeing the error below on our search head servers:

ERROR [itsi_re(reId=Tksg,reMode=Preview)] [main] CommonUtils:331 - FunctionName=isAnyClusterInRollingRestartOrUpgrade, Status=Failed, ErrorMessage="Skipping cluster rolling restart status check. Unable to get cluster config due to exception calling REST endpoint"

Also, the ITSI Analytics Monitoring dashboard shows the number of Rules Engine processes as zero. We checked the cluster status and there is no issue with the clustering. Can anyone please suggest how to resolve this issue?

Hi Team, I need your help. While ingesting data using a Python script (i.e. a scripted input), I am getting the value "none" for the timestamp field. The data is populated fine in the script itself, but when it is ingested into Splunk, the timestamp field comes in with the extra value "none". Need your help.

Hi team, I have a question related to the deployment server's connection to the license master. My deployment server is connected to a license master; we changed the license master but did not change the master URI on the deployment server, and the last time it connected to the license master was 23 months ago. As of now I have not observed any impact. What happens if we don't connect our deployment server to the license master, is there any impact? Please let me know. I also wanted to know if we can change the master URI address using the Splunk GUI; it is currently configured with a different master URI. Please let me know about this.

Thanks & Regards, Umesh

Hi, I need to extract the file name below from the extracted output:

MDTM|07/02/2023 23:58:59.007|[SFTP:3460819_0:eftpos:10.18.168.158] READ: *MDTM /eftpos/prod/AR-100-01_20230702_PAY.zip 16883063270

file name: AR-100-01_20230702_PAY.zip

I need to extract the above file name using the rex command.

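A sketch against the sample line above (worth validating on more of your events): grab whatever follows the last "/" in the path after READ: *MDTM.

base_search
| rex "READ:\s+\*MDTM\s+\S*/(?<file_name>[^/\s]+)"
| table file_name

base_search is a placeholder for your existing SFTP-log search; for the event shown, the extracted field comes out as file_name=AR-100-01_20230702_PAY.zip.
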
Hi, the use case is GitHub Dependabot vulnerability alerts: once an alert is received, search another index containing the GitHub SBOM (listing packages and versions) to see what version we have. The search below works great to return one result when scoped tightly, essentially to one event in each data source:

index=github_vulnerabilities source="office-sites" security_vulnerability.package.name=semver-regex earliest=-26h latest=now
| rename source AS repository security_vulnerability.package.name AS name
| table repository name security_vulnerability.first_patched_version.identifier
| append [search index="github" repository="office-sites" name=semver-regex earliest=-3d latest=now | table name version]
| selfjoin name

I'd like to loosen the top search to at least remove the specific package, and loosen the appended search to just the index, so that it returns all repos that have the affected package name. I've tried | join with max=0, joining on one or two fields, but I couldn't get it to come out how I expected/wanted.

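One possible way to loosen this without join/selfjoin limits (a sketch, reusing the field names from the search above): pull both indexes in a single pass and merge them with stats keyed on the package name.

(index=github_vulnerabilities earliest=-26h latest=now) OR (index=github earliest=-3d latest=now)
| eval name=coalesce(name, 'security_vulnerability.package.name')
| eval sbom_repo=if(index=="github", repository, null()), installed=if(index=="github", version, null())
| stats values(security_vulnerability.first_patched_version.identifier) as first_patched_version values(installed) as installed_version values(sbom_repo) as repositories by name
| where isnotnull(first_patched_version) AND isnotnull(repositories)

The final where keeps only packages that appear both in the Dependabot alerts and in at least one SBOM, and repositories lists every repo that contains the affected package, without naming a specific package or repository up front.
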
From Monitoring Console > Indexing > SmartStore > SmartStore Cache Performance: Deployment, we can see the Max Cache Size value:

| rest splunk_server_group=* /services/properties/server/cachemanager/max_cache_size
| eval size=if(value=0, "No Max", tostring(value, "commas")."MB")
| fields size

But this query returns results for all Splunk servers. I want to show the values for indexers only, but there is no option to select indexers. Is there any way we can customise it within the MC? Thanks.

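One possible tweak (a sketch, assuming the search runs on the Monitoring Console, which defines per-role server groups during setup): point splunk_server_group at the MC's indexer group instead of the wildcard.

| rest splunk_server_group=dmc_group_indexer /services/properties/server/cachemanager/max_cache_size
| eval size=if(value=0, "No Max", tostring(value, "commas")."MB")
| fields splunk_server size

dmc_group_indexer is the group the MC normally creates for indexers; check the MC's General Setup or its distsearch configuration if your group names differ.
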
Hi all,

Splunk UF since 9.x sets

[Service]
NoNewPrivileges=yes
AmbientCapabilities=CAP_DAC_READ_SEARCH

in the systemd unit file (/etc/systemd/system/SplunkForwarder.service). This enables splunkforwarder to bypass filesystem permissions and ACLs and read every file on the hard disk. Yes, every file: every SSH key, every private key, confidential data... the opposite of the need-to-know principle.

As we have correct filesystem permissions in place, we decided to remove those settings from the systemd unit file. When we now run, for example, the "/opt/splunkforwarder/bin/splunk stop" command, the systemd file is rewritten by the splunk command, and the next start runs splunkforwarder with the CAP_DAC_READ_SEARCH capability enabled. To make it more visual, we uploaded a video to https://asciinema.org/a/FAYFPJYrKaizfL3alzvm3uNGF .

Are you able to reproduce the issue? What do you think? For us this looks like a security issue, as we would never expect a command like "splunk stop" to manipulate systemd files. I'm also not aware which other commands might rewrite the systemd unit, and I do not see any use case for this.

Steps to reproduce:

install-splunkuf.sh

#!/bin/bash
# break if errors
set -e
# add system user
sudo groupadd splunk
sudo useradd splunk --system --home-dir /opt/splunk --create-home -g splunk
wget -O /tmp/splunkuf.tgz https://download.splunk.com/products/universalforwarder/releases/9.1.0/linux/splunkforwarder-9.1.0-1c86ca0bacc3-Linux-x86_64.tgz
#wget -O /tmp/splunkuf.tgz https://download.splunk.com/products/universalforwarder/releases/9.0.5/linux/splunkforwarder-9.0.5-e9494146ae5c-Linux-armv8.tgz
tar zxfv /tmp/splunkuf.tgz -C /opt
echo -e "[user_info]\nUSERNAME=admin\nPASSWORD=Password01" > /opt/splunkforwarder/etc/system/local/user-seed.conf
/opt/splunkforwarder/bin/splunk start --accept-license && /opt/splunkforwarder/bin/splunk stop -f
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk -group splunk -systemd-managed 1
# remove capabilities from systemd service
sed -i '/^NoNewPrivileges\|^AmbientCapabilities/s/^/#/' /etc/systemd/system/SplunkForwarder.service
systemctl daemon-reload
systemctl start SplunkForwarder
systemctl status SplunkForwarder
# systemd file is still fine
echo -n "systemd unit file after starting splunk"
cat /etc/systemd/system/SplunkForwarder.service
pid=$(systemctl show -p MainPID --value SplunkForwarder.service) && getpcaps $pid

When you now run

/opt/splunkforwarder/bin/splunk stop
cat /etc/systemd/system/SplunkForwarder.service

you see that the lines

NoNewPrivileges=yes
AmbientCapabilities=CAP_DAC_READ_SEARCH

are re-added to /etc/systemd/system/SplunkForwarder.service, and the next time the service is started the caps are set. A backup file is also placed at /etc/systemd/system/SplunkForwarder.service_TIMESTAMP.

When running an strace

strace -s 0 -o /tmp/910stop.strace -f /opt/splunkforwarder/bin/splunk stop

we clearly see the splunk process manipulating the systemd file:

2120 rename("/etc/systemd/system/SplunkForwarder.service", "/etc/systemd/system/SplunkForwarder.service_2023_07_03_21_47_00") = 0
2120 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7feb05354f10) = 2122
2120 wait4(2122,
2122 set_robust_list(0x7feb05354f20, 24) = 0

This happens on all 9.x versions of the UF.

Best regards,
Andreas

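For what it's worth, one possible workaround (an assumption on my part, not an officially documented procedure): leave the vendor unit file alone and override the capability setting with a systemd drop-in, which systemd re-applies on top of the unit even after splunk rewrites it. Whether clearing CAP_DAC_READ_SEARCH breaks log collection depends on your file permissions, so test before rolling it out.

# /etc/systemd/system/SplunkForwarder.service.d/10-drop-dac-read-search.conf
# Drop-ins are merged after the main unit file; an empty assignment resets
# the capability list that the vendor file sets.
[Service]
AmbientCapabilities=

Then run systemctl daemon-reload and restart the service; getpcaps on the main PID should no longer show cap_dac_read_search.
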
Hello @Claudia.Landivar, given your experience (I have read your cases, and you have tracked mainframe issues), can you help me, please? We are doing a mainframe install but we can't get visibility of all the apps, and we also don't see BTs in the app I shared: without BT. I have reviewed the documentation but it is not specific on this. Has anyone experienced this case? I would appreciate your valuable help, please.

Hello, the last ask about this topic I saw on the forum was from 2018, and a lot can have changed since then. Silent/unattended install for AppDynamics agents (NetViz, DB, APM, Machine Agent) for both .NET and Java: we want to put the agents on new server builds, but we do not want to configure them at this time; we want them to sit dormant until the application owners are ready to go live with AppDynamics.

^ Edited by @Ryan.Paredez for minor formatting