All Topics

Summary: On a CentOS Stream 9 system, after installing Splunk in /opt/splunk and configuring it to start on boot with systemd, I've noticed unusual behavior. Using the manual Splunk commands (/opt/splunk/bin/splunk [start | stop | restart]) alters the Splunkd.service file in /etc/systemd/system/ and creates a timestamped backup. This change prevents Splunk from starting via systemctl commands, and consequently on boot, defeating the purpose of the systemd setup. Using chattr to make the service file immutable is a current workaround. This behavior seems specific to CentOS Stream 9.

How to recreate the issue: On a CentOS Stream 9 machine, install Splunk under /opt/splunk and run Splunk as the user 'splunk'. After stopping Splunk, enable boot-start with systemd-managed 1. After enabling boot-start, a unit file is created at /etc/systemd/system/Splunkd.service. Starting and stopping Splunk with systemctl works fine and behaves normally. However, if you run sudo /opt/splunk/bin/splunk [start | stop | restart], Splunk itself changes /etc/systemd/system/Splunkd.service and creates a backup with a timestamp, e.g. Splunkd.service_2023_09_21_06_49_05.

When trying to start with systemctl again, e.g. sudo systemctl start Splunkd:

    Failed to start Splunkd.service: Unit Splunkd.service failed to load properly, please adjust/correct and reload service manager: Device or resource busy
    See system logs and 'systemctl status Splunkd.service' for details.

This leads to Splunk not starting after a reboot, which is the whole point of enabling systemd. The error message appears because the Splunkd.service file has been altered. To get systemctl working again, I run sudo systemctl daemon-reload, but as soon as I issue another manual start | stop | restart command, the same issue arises.

When diffing the new service file against the old service file:

    diff Splunkd.service Splunkd.service_2023_09_21_06_49_05
    26c26
    < MemoryLimit=3723374592
    ---
    > MemoryLimit=3723378688

MemoryLimit is the only value that changes for each subsequent 'backup' of the service file; it just switches between these two values.

ChatGPT suggested making the service file immutable with sudo chattr +i /etc/systemd/system/Splunkd.service. After this change, a manual start | stop | restart prints a WARNING message, but it no longer mangles the service file, and hence Splunk starts after a reboot. So it is Splunk itself that is changing the service file. However, this issue was discovered on CentOS Stream 9 and cannot be replicated on earlier versions. Does anybody know what may have caused this strange behavior?
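For reference, a minimal sketch of the recovery and workaround sequence described above, assuming the default unit name Splunkd.service (the immutable flag can be removed again with chattr -i when the unit file genuinely needs editing):

    # Recover systemd after Splunk has rewritten the unit file
    sudo systemctl daemon-reload
    sudo systemctl start Splunkd

    # Workaround: make the unit file immutable so that manual
    # /opt/splunk/bin/splunk start|stop|restart can no longer rewrite it
    # (Splunk then only prints a WARNING)
    sudo chattr +i /etc/systemd/system/Splunkd.service

    # Undo the workaround before intentional edits to the unit file
    sudo chattr -i /etc/systemd/system/Splunkd.service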
Hi, I am trying to learn more about the certificates found within the file /etc/auth/appsCA.pem. I'm referring to Splunk's default certificates: GlobalSign Root CA, GlobalSign ECC, DigiCert Global Root, ISRG Root, IdenTrust Commercial Root. Are they safe? After changing the certificate configuration to my self-signed certificates and merging in the Splunk CA certificates to make Splunkbase work properly (this case here), I wondered whether all of them are necessary for Splunk to work successfully or only some of them. Is there a documentation page, or can someone explain the use of each of the certificates? Thanks in advance,
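Not an answer on which of them are required, but a quick way to see what is actually in that bundle is to split it and print each certificate's subject, issuer and expiry. This is only a sketch and assumes a standard openssl install and the default path under $SPLUNK_HOME/etc/auth:

    # Split appsCA.pem into one file per certificate and summarise each one
    awk '/BEGIN CERTIFICATE/{n++} {print > ("appsCA_" n ".pem")}' \
        "$SPLUNK_HOME/etc/auth/appsCA.pem"
    for f in appsCA_[0-9]*.pem; do
        openssl x509 -in "$f" -noout -subject -issuer -enddate
    done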
Hi all, I have migrated a 9.0.4 HF from a Windows Server 2012 to a Windows Server 2022. The original connector was working fine, while the new one (with the same settings) keeps crashing. This is the error I get almost every minute in the Application event viewer:

    Faulting application name: splunk-winevtlog.exe, version: 2305.256.25832.56887, time stamp: 0x64e8dfcc
    Faulting module name: ntdll.dll, version: 10.0.20348.1970, time stamp: 0x31881ea2
    Exception code: 0xc0000374
    Fault offset: 0x0000000000104909
    Faulting process id: 0x1304
    Faulting application start time: 0x01d9ed2bd5be870c
    Faulting application path: C:\Program Files\Splunk\bin\splunk-winevtlog.exe
    Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
    Report Id: 45c2b6fd-2c6e-484d-9602-eb948052101d
    Faulting package full name:
    Faulting package-relative application ID:

I tried to upgrade the HF to version 9.0.6 and then to version 9.1.1, but the error persists. It seems to be caused by the inputs configured in Splunk_TA_windows (version 8.7.0 installed). These are the enabled inputs that cause the issue:

    [WinEventLog://Security]
    disabled = 0
    start_from = oldest
    current_only = 0
    evt_resolve_ad_obj = 1
    checkpointInterval = 5
    blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
    blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
    blacklist3 = 4656,4658,4690,5031,5140,5150,5151,5154,5155,5156,5157,5158,5159
    renderXml = false
    index = wineventlog

    ###### Forwarded WinEventLogs (WEF) ######
    [WinEventLog://ForwardedEvents]
    disabled = 0
    start_from = oldest
    current_only = 0
    checkpointInterval = 5
    ## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
    renderXml = true
    host = WinEventLogForwardHost
    index = wineventlog

The only solution I found is to disable the ForwardedEvents input. This way the HF works as expected. I also tried setting current_only=1 on that input, with no luck. Does anyone know if this is a known issue and how to troubleshoot it? Regards, Alessandro
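For anyone comparing notes: the workaround mentioned at the end (disabling the WEF input) can be kept out of the add-on's default files with a small local override. The path and stanza name below are just the ones from this post; this is a temporary mitigation, not a fix for the crash itself:

    # $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/inputs.conf
    # Temporarily disable only the forwarded-events input while the
    # splunk-winevtlog.exe crash is investigated
    [WinEventLog://ForwardedEvents]
    disabled = 1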
Recently I created a KV store lookup using transforms.conf and collections.conf to get data from an API. When the KV lookup is opened manually, we can see the data, but when searching on the search head with inputlookup, it doesn't show anything. Permissions have also been granted. Is there anything else we might be missing?
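In case it helps anyone checking the same setup, here is a minimal sketch of what the wiring usually looks like; the collection, stanza and field names are hypothetical. The important detail is that inputlookup takes the transforms.conf stanza name (the lookup definition), not the collection name from collections.conf:

    # collections.conf
    [my_collection]

    # transforms.conf
    [my_kv_lookup]
    external_type = kvstore
    collection = my_collection
    fields_list = _key, host, status

and on the search head:

    | inputlookup my_kv_lookup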
I want to automate uploading a lookup file, but first I have to upload it to the staging area. Is the staging area a public area, or can only admins access it in an organization? Is there an API, or can anyone help, so that I can automate this lookup upload directly from a user-designated directory? I tried, but it gives me this output:

    ERROR:root:[failed] file: 'prices.csv', status:503, reason:Service Unavailable, url:http://localhost:8000/services/data/lookup-table-files/
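For comparison, this is the two-step flow that endpoint normally expects: the file is first copied into Splunk's lookup staging directory on the server, then registered over the management port (8089, not the web port). The paths, app namespace and credentials below are assumptions for illustration, not a confirmed fix for the 503:

    # 1. Copy the CSV into the lookup staging area on the Splunk server
    cp prices.csv "$SPLUNK_HOME/var/run/splunk/lookup_tmp/prices.csv"

    # 2. Register the staged file as a lookup table file via REST
    curl -k -u admin:changeme \
        https://localhost:8089/servicesNS/nobody/search/data/lookup-table-files \
        -d name=prices.csv \
        --data-urlencode "eai:data=$SPLUNK_HOME/var/run/splunk/lookup_tmp/prices.csv"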
How can I send an email alert with data from multiple panels of a dashboard as a single CSV file, with separate tabs for each panel?
As the title suggests, I am trying to onboard multiple data sources in Splunk UBA. I would like to know whether there is a way, from the CLI, to see the EPS for each ingested data source. The EPS fluctuates when we run data sources that ingest data from a specific time window, even though the data exists within Splunk Enterprise, compared to ingesting over All Time.
Hi Team, I am new to Splunk and looking for a way to fetch some metrics data from Splunk using the Splunk REST API. Can you please help me with the right approach to implement this? I have explored 2 ways so far:

1. Using the search job endpoints (trying this out is in progress)
2. Using search queries within scripts

If you can help with the pros and cons of the above methods, and also suggest any other way that might be more appropriate, it would be helpful. I am looking for the best way to develop these APIs so that the results can be stored in files or a database. Any inputs would be really appreciated. Thanks in advance! Akshada
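For the search-job route (option 1), the documented flow is: create a job, poll until it is done, then fetch the results. A rough sketch with curl against the management port; the host, credentials and the metrics query are placeholders:

    # Create a search job; the response contains a search ID (sid)
    curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
        -d output_mode=json \
        --data-urlencode 'search=| mstats avg(_value) WHERE index=my_metrics AND metric_name="cpu.usage" span=5m'

    # Once the job reports isDone=1, fetch the results as JSON
    curl -k -u admin:changeme \
        "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"

The same flow is wrapped by the Splunk SDKs if scripting directly against REST becomes cumbersome.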
Hello, I already installed the UF on Windows Server 2016 but I get these errors in splunkd.log:

    09-22-2023 10:19:01.204 +0700 ERROR ExecProcessor [4720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::init: Failed to bind to DC, dc_bind_time=0 msec
    09-22-2023 10:19:01.204 +0700 ERROR ExecProcessor [4720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::subscribeToEvtChannel: Could not subscribe to Windows Event Log channel 'Microsoft-Windows-Sysmon/Operational'
    09-22-2023 10:19:01.204 +0700 ERROR ExecProcessor [4720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::init: Init failed, unable to subscribe to Windows Event Log channel 'Microsoft-Windows-Sysmon/Operational': errorCode=5

Can anyone help me with this issue?
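errorCode=5 is the Windows "access denied" code, so a hedged first check (assuming the forwarder runs under a dedicated service account rather than Local System) is whether that account is allowed to read the Sysmon channel:

    REM Show the channel configuration, including its access list (SDDL)
    wevtutil gl Microsoft-Windows-Sysmon/Operational

    REM A common remedy is adding the forwarder's service account to the
    REM local "Event Log Readers" group (the account name below is hypothetical)
    net localgroup "Event Log Readers" DOMAIN\svc_splunkuf /add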
Hi, We have a requirement to migrate ITSI Content Packs to Splunk Cloud. Is it possible to achieve this? If yes, could you please help with the list of steps to perform? I would also like to know what risks are involved.
Some search heads show an "Up" status, but other search heads always show "Pending". How can I resolve this issue?
Hello, I am trying to implement a behavioral rule that checks whether an IP was used in the last 7 days or not. This is what my search looks like:

    index=<index> operationName="Sign-in activity"
    | stats count by ipAddress
    | eval is_historical=if(ipAddress IN [ search index=<index> operationName="Sign-in activity" earliest=-7d@d | dedup ipAddress | table ipAddress], "true", "false" )

I got wrong results, and it seems that only the first search was executed and the eval failed. Any help please? Regards
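For what it's worth, eval cannot execute a subsearch, which would explain why only the outer search appears to run. One common alternative shape is to search the whole window once and compare first-seen times; the index and operationName are taken from the post, while the 8-day window and the "new within the last day" cut-off are assumptions to adjust:

    index=<index> operationName="Sign-in activity" earliest=-8d@d
    | stats earliest(_time) as first_seen count by ipAddress
    | eval is_historical=if(first_seen < relative_time(now(), "-1d@d"), "true", "false")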
Hello, community. I am trying to identify ways to make this search faster:

    index=Win_Logs EventCode IN (528,540,4624) AND user IN (C*,W*,X*)
    | dedup user
    | timechart span=1w dc(user) as Users

Anything with tstats, metasearch or metadata? Thanks in advance
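If the CIM Authentication data model is populated and accelerated for that index (an assumption this post does not confirm), a tstats version would look roughly like the sketch below; metasearch and metadata will not help here, since user and EventCode are not index-time metadata fields:

    | tstats dc(Authentication.user) as Users
        from datamodel=Authentication
        where index=Win_Logs
          AND (Authentication.user="C*" OR Authentication.user="W*" OR Authentication.user="X*")
        by _time span=1w

Note that this counts distinct users per week, whereas the dedup in the original removes repeats across the entire time range before the weekly chart.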
I would like to get the number of people connected to our network (one successful login session per user per day will suffice) over a one-month period, using the earliest and now() attributes. The figures should be presented per week, like a chart.
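One possible shape for such a search, sketched with a placeholder index and field names (the action=success filter and the four-week window are assumptions to adapt to the actual login events):

    index=<auth_index> action=success earliest=-4w@w latest=now
    | timechart span=1w dc(user) as "Users connected"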
I would like to set up a dashboard that tracks the totals for user agents in incoming requests. I couldn't find a "user agent", "user-agents" or any other such field listed. When I exported the search results to CSV, I saw the following heading:

    "_raw","_time",cloudaccount,host,index,linecount,message,source,sourcetype,"splunk_server"

It appears that the info containing the user agent is contained in the message field, enclosed in doubled double quotes. I assume that in order to count each type of user agent, I first need to isolate these values, then count them. What's the best way to do that?
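A hedged sketch of the isolate-then-count step described above; the index name is a placeholder and the regex assumes the user agent really is the doubled-double-quoted value inside message, so it will likely need tuning against the real events:

    index=<your_index>
    | rex field=message "\"\"(?<user_agent>[^\"]+)\"\""
    | stats count by user_agent
    | sort - count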
Clayton Homes faced the increased challenge of strengthening their security posture as they went through rapid digital transformation. The challenge was further exacerbated by the hybrid cloud reality as Clayton Homes moved more deployments to the cloud. They wanted a better way to build a secure and more resilient digital world while migrating to the cloud. Join us in this webinar to hear from Clayton Homes on how to build scalable security while moving to the cloud successfully and efficiently with Splunk. By deploying Splunk Enterprise Security, a data-centric, modern security information and event management (SIEM) solution, in the cloud, Clayton Homes was able to detect and respond to threats quickly. Hear how Splunk enabled Clayton Homes to gain end-to-end visibility across their IT environment with Splunk Cloud Platform without the need to purchase, manage or deploy infrastructure. In this webinar you will learn more from Clayton Homes about best practices for:

Migrating on-prem deployments to the cloud with success
Harnessing data-driven insights with scalable security to protect your business and mitigate risks
Building a solid foundation for your hybrid cloud with the right tools, expertise and services from Splunk

Register Now!
Can an alert be run from a specific search head in a clustered environment? We would like to configure a report on a specific search head in the clustered environment, and we don't want the report to be replicated across all of the SHs. Can we force the report to run on a specific SH based on the app? Thanks, Dhana
Hi Team, I am not able to see the ABAP system details in the AppDynamics Controller. I am getting the error below:

    HTTP server yourcompany.saas.appdynamics.com (URI '/controller/rest/applications') responded with status code 500 (Connection Broken)

Regards, Giridhar
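One way to narrow this down (purely a sketch; the account, user and password are placeholders) is to call the same REST path directly and see whether the controller itself returns a 500 outside the ABAP agent:

    curl -s -u "myuser@myaccount:mypassword" \
        "https://yourcompany.saas.appdynamics.com/controller/rest/applications?output=JSON"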
How can I replace a string using rex while keeping part of the matched string? Thank you for your help. For example, I am trying to replace "::" (double colon) with ":0:" (colon zero colon) when the preceding characters contain ":" followed by 1 to 3 characters.

    | rex mode=sed field=ip "s/:.{1,3}::/:.{1,3}:0:/g"

This does not work because it literally replaces the match with ":.{1,3}:0:" instead of retaining the matched characters.

Before:
    a0:1::21
    b0:1c::21
    c0:a13::23

After:
    a0:1:0:21
    b0:11:0:21
    c0:111:0:23
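One way to keep the matched characters is a capture group plus a backreference, which sed-mode substitution in rex supports. A sketch against the sample values above (the field name ip is taken from the post):

    | rex mode=sed field=ip "s/(:[^:]{1,3})::/\1:0:/g"

Here the group captures the colon and the 1 to 3 characters that precede the double colon, and \1 re-inserts them in front of ":0:".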
Hi, I found a problem in Splunk DB Connect when I tried to add a new input. I can add new connections, and the current inputs are working, but when I try to add a new input, or try to use the "SQL Explorer" after choosing a connection, I get a "Cannot get schemas" error message. In the _internal index I found this error message:

    Unable to get schemas metadata java.sql.SQLException: Non supported character set (add orai18n.jar in your classpath): EE8ISO8859P2

After this I updated DB Connect and the Oracle JDBC drivers, but it did not help. I consulted our DB admins; as it turned out, these DBs really do differ when it comes to character encoding (EE8ISO8859P2 vs. UTF8). Of course the orai18n.jar file is there with the driver. I found this in the documentation: https://docs.splunk.com/Documentation/DBX/3.14.1/DeployDBX/Troubleshooting#Unicode_decode_errors "Splunk DB Connect requires you to set up your database connection to return results encoded in UTF-8. Consult your database vendor's documentation for instructions about how to do this." Is it possible that DB Connect can only handle Oracle DBs with UTF-8 encoding? Thanks, László