In the last month, the Splunk Threat Research Team (STRT) has had one release of new security content via the Enterprise Security Content Update (ESCU) app (v4.12.0). With this release, there are 8 new detections and 1 new analytic story now available in Splunk Enterprise Security via the ESCU application update process. Content highlights include: a Forest Blizzard analytic story containing detections for "Living Off The Land" attack techniques that use headless web browsers to exfiltrate data files through legitimate platforms like Mockbin, delivered via ZIP archives containing LNK files. These techniques were observed in the cyberattack on Ukraine's energy infrastructure, orchestrated by various advanced persistent threat (APT) groups via deceptive emails to steal NTLMv2 hashes. Also included are six new detections related to Windows Active Directory enumeration, specifically to detect activity associated with popular red team tools such as PowerView, which attackers typically use for reconnaissance.

New Analytic Story: Forest Blizzard

New Detections:
- Windows Find Domain Organizational Units with GetDomainOU
- Headless Browser Usage
- Headless Browser Mockbin or Mocky Request
- Windows Get Local Admin with FindLocalAdminAccess
- Windows Forest Discovery with GetForestDomain
- Windows Find Interesting ACL with FindInterestingDomainAcl
- Windows AD Privileged Object Access Activity (External Contributor: Steven Dick)
- Windows AD Abnormal Object Access Activity (External Contributor: Steven Dick)

The team has also published the following blogs:
- Sharing is Not Caring: Hunting for Network Share Discovery
- Defending the Gates: Understanding and Detecting Ave Maria (Warzone) RAT
- Mockbin and the Art of Deception: Tracing Adversaries, Going Headless and Mocking APIs

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Hello, I'm trying to count events by a field called "UserAgent". If I search for the events without any calculated field, I get results from different UserAgents, but once I use eval, I don't get the expected results. For example, with this eval I'm getting only "android", even though I'm also searching for "ios" with "ContextData.UserAgent"=*ios* as part of my query:

| eval UserAgent = if("ContextData.UserAgent"="*ios*","ios","android")

What am I doing wrong?
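For reference, a sketch of the usual fix: eval's = is a literal string comparison (wildcards don't apply there), and double quotes denote string literals rather than field names, so a field name containing dots needs single quotes and a regex match:

```
| eval UserAgent = if(match('ContextData.UserAgent', "(?i)ios"), "ios", "android")
| stats count BY UserAgent
```

With the original syntax, the condition is never true (no UserAgent literally equals the string "*ios*"), which is why everything falls through to "android".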
Hello All! I'm trying to set up CAC-based auth for Splunk 9.1.1 on Windows Server 2022 for the first time. I have successfully set up LDAP and am able to sign into Splunk using an AD username/password without any issues. When I add the requireClientCert, enableCertBasedUserAuth, and certBasedUserAuthMethod settings and attempt to access the Splunk GUI, all users are immediately greeted with an 'Unauthorized' message. I've been fighting this for about a week now, and Splunk support hasn't been able to help me pin it down yet. Any assistance would be greatly appreciated. I've ensured the TLS 1.2 registry keys exist in SCHANNEL to enable TLS 1.2. Corresponding logs from splunkd.log for the logon attempt are:

09-29-2023 09:02:43.191 -0400 INFO AuthenticationProviderLDAP [12404 TcpChannelThread] - Could not find user=" \x84\x07\xd8\xb6\x05" with strategy="123_LDAP"
09-29-2023 09:02:43.192 -0400 ERROR HTTPAuthManager [12404 TcpChannelThread] - SSO failed - User does not exist: \x84\x07\xd8\xb6\x05
09-29-2023 09:02:43.192 -0400 ERROR UiAuth [12404 TcpChannelThread] - user= \x84\x07\xd8\xb6\x05 action=login status=failure reason=sso-failed useragent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36" clientip=<ip>
09-29-2023 09:03:10.247 -0400 ERROR UiAuth [12404 TcpChannelThread] - SAN OtherName not found for configured OIDs in client certificate
09-29-2023 09:03:10.247 -0400 ERROR UiAuth [12404 TcpChannelThread] - CertBasedUserAuth: error fetching username from client certificate

authentication.conf:

[splunk_auth]
minPasswordLength = 8
minPasswordUppercase = 0
minPasswordLowercase = 0
minPasswordSpecial = 0
minPasswordDigit = 0

[authentication]
authSettings = 123_LDAP
authType = LDAP

[123_LDAP]
SSLEnabled = 1
anonymous_referrals = 0
bindDN = CN=<Account>,OU=Service Accounts,OU=<Command Accounts>,DC=<Command>,DC=NAVY,DC=MIL
bindDNpassword = <removed>
charset = utf8
emailAttribute = mail
enableRangeRetrieval = 0
groupBaseDN = OU=SPLUNK Groups,OU=Groups,DC=<command>,DC=NAVY,DC=MIL
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = DC.<Command>.NAVY.MIL
nestedGroups = 1
network_timeout = 20
pagelimit = -1
port = 636
realNameAttribute = displayName
sizelimit = 1000
timelimit = 15
userBaseDN = OU=Users,OU=<Command Accounts>,DC=<Command>,DC=NAVY,DC=MIL
userNameAttribute = userprincipalname

[roleMap_LDAP]
admin = SPLUNK AUDITOR
can_delete = SPLUNK AUDITOR
network = SPLUNK NETWORK
user = SPLUNK AUDITOR;SPLUNK USERS

web.conf:

[settings]
enableSplunkWebSSL = true
privKeyPath = $SPLUNK_HOME\etc\auth\dodCerts\splunk2_key.pem
serverCert = $SPLUNK_HOME\etc\auth\dodCerts\splunk2_server.pem
sslPassword = <removed>
requireClientCert = true
sslRootCAPath = $SPLUNK_HOME\etc\auth\dodCerts\DoDRootCA3.pem
enableCertBasedUserAuth = true
SSOMode = permissive
trustedIP = 127.0.0.1
certBasedUserAuthMethod = PIV

server.conf:

[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = $SPLUNK_HOME\etc\auth\dodCerts\DoDRootCA3.pem
serverCert = $SPLUNK_HOME\etc\auth\dodCerts\splunk2_server.pem
sslPassword = <removed>
cliVerifyServerName = true
sslVersions = tls1.2
sslVerifyServerCert = true

[general]
serverName = SPKVSPLUNK2
pass4SymmKey = <removed>
trustedIP = 127.0.0.1
Hi, I've been hunting through the REST API documentation, as well as searching online, for the correct endpoint/curl request for maintaining sourcetypes, but haven't found anything. It is a trivial task using the UI, but my use case is that I want to spin up a Splunk instance using a script, as part of an automated test process, so UI input won't meet the requirement. Can anyone point me in the right direction?
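Since a sourcetype is just a stanza in props.conf, one scriptable route is the generic configuration endpoint, configs/conf-props. A sketch; the host, credentials, app context ("search"), sourcetype name, and attribute values below are all placeholders:

```
# Create a sourcetype (i.e. a new props.conf stanza) in the "search" app
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/configs/conf-props \
  -d name=my_sourcetype \
  -d SHOULD_LINEMERGE=false \
  -d TIME_FORMAT=%Y-%m-%dT%H:%M:%S

# Update an attribute on an existing stanza
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/configs/conf-props/my_sourcetype \
  -d MAX_TIMESTAMP_LOOKAHEAD=32
```

The same configs/conf-{file} pattern works for other .conf files, which makes it convenient for scripted instance setup.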
Hi, we upgraded the Splunk DB Connect app to version 3.14.1, and the drivers as well: ojdbc11.jar v21.11 (Innovation Release) along with orai18n.jar. While trying to add a new input we noticed that for some connections we got the error "cannot get schemas". However, we are able to add inputs, and the connections are working. The database versions are Oracle 19.19 and 12.1.0.2. We downgraded the driver to ojdbc11.jar v19.20 (Long Term Release) along with the respective orai18n.jar, but we still "cannot get schemas". All the necessary permissions have been granted to the user. In the _internal index we encounter this error message: "Unable to get schemas metadata java.sql.SQLException: Non supported character set (add orai18n.jar in your classpath): EE8ISO8859P2", but orai18n.jar is already there. Any kind of help or idea would be appreciated. Thank you in advance!
Hi there, I want to send an email to users who have more than 20 failed-login (event 4625) events. The search itself is not a problem, but I couldn't figure out how to send emails to the specific users who have the 4625 login failure events. I know about trigger actions like send email, but I couldn't figure out how to target specific users. I don't want to send the email to a group; I need to send it to the specific users who have the 4625 events. Any help would be appreciated!
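One common pattern: have the search emit one row per offending user plus that user's email address, then let the alert action fill in the recipient from the result. A sketch; the index, lookup name, and field names are assumptions:

```
index=wineventlog EventCode=4625
| stats count AS fail_count BY user
| where fail_count > 20
| lookup user_email_lookup user OUTPUT email
```

With the alert's trigger condition set to "For each result", the email action's "To" field can use the token $result.email$, so each user receives their own notification rather than a group getting one combined message.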
Hi Splunk Experts, the timewrap command uses a d (rolling 24-hour) format, but I'm wondering whether it's possible to make it use a calendar-day ("today") format. For example, if the current time is 10 AM, it displays a timechart from 12 AM to 10 AM (12, 14, 16, 18, 20, 22, 00, 02, 04, 06, 08, 10), but I'm looking for 00 to 22 (00, 02, 04, 06, 08, 10, 12, 14, 16, 18, 20, 22). Any advice would be much appreciated.

index="_internal" error
| timechart span=10m count as Counts
| timewrap d series=exact time_format="%Y-%m-%d"
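The day buckets that timewrap produces follow the search window, so snapping the window to midnight with the @d modifier should yield 00-to-22 columns. A sketch, assuming whole past days (rather than a partial "today") are acceptable:

```
index="_internal" error earliest=-7d@d latest=@d
| timechart span=10m count as Counts
| timewrap d series=exact time_format="%Y-%m-%d"
```

With earliest=-7d@d and latest=@d, each wrapped day runs from 00:00 to 24:00 instead of from "24 hours ago" to "now".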
Some of the event logs in Splunk are getting truncated at the beginning. I tried some props to break before the date and to line-break at a new line, but nothing seems to be working.

Truncated events:
9/29/23 5:40:46.000 AM entFacing:1x.1xx.1xx.2xx/4565 to inside:1x.9x.x4x.x4x/43 duration 0:00:00 bytes 0
9/29/23 5:40:36.000 AM 53 (1x.x8.2xx.2xx/34)
9/29/23 5:37:21.000 AM bytes 1275

Well-parsed events:
2023-09-29T05:57:57-04:00 1x.xx.2.1xx %ASA-6-302014: Teardown TCP connection 758830654 for ARCC:1xx.x7.9x.1x/xx to inside:1x.2xx.6x.x1/xx17 duration 0:00:00 bytes 0 Failover primary closed
2023-09-29T05:57:57-04:00 1x.xx.2.1xx %ASA-6-302021: Teardown ICMP connection for faddr 1x0.x5.0.1x/0 gaddr 1x.2x6.1xx6.x6/0 laddr 1x.xx6.1xx.x6/0 type 3 code 1

My props:
TZ = UTC
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
CHARSET = UTF-8
disabled = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
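Since the well-parsed events all begin with an ISO-style timestamp, one thing to try is an explicit LINE_BREAKER anchored on that timestamp, so events can only break there. A hedged props.conf sketch; the stanza name is a placeholder, and the %z addition assumes the "-04:00" offset in the raw events should be honored instead of TZ = UTC:

```
[cisco_asa_custom]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 32
```

The truncated samples also carry a different timestamp format (9/29/23 5:40:46.000 AM) than the well-parsed ones, which suggests a second log format or sourcetype may be mixed into the same input and would need its own stanza.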
Hello comrades, I'm just curious: is there any way to shorten frequent words? For example, in <Data Name='IpAddress'>::ffff:10.95.81.99</Data>, shortening IpAddress to ipaddr or something like IPa. Many thanks,
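If the goal is just a shorter field name at search time (rather than rewriting the stored event text), a FIELDALIAS in props.conf is one low-effort option. A sketch; the sourcetype name is a placeholder:

```
[your_sourcetype]
FIELDALIAS-shorten_ip = IpAddress AS ipaddr
```

The same can be done ad hoc in a search with | rename IpAddress AS ipaddr. Rewriting the raw event text itself (for example with SEDCMD at index time) is also possible, but that permanently changes the indexed data.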
Hello, I was trying to find all the null values in my index, but it is not working as expected. Do we need any changes in the search?

index=vpn earliest=-7d
| fieldsummary
| where match(values, "^\[{\"value\":\"null\",\"count\":\d+\}\]$")

Thanks
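Matching the exact JSON layout of fieldsummary's values column is fragile (escaping, ordering, truncation for high-cardinality fields). A less brittle sketch, assuming the intent is "fields whose only value is the literal string null":

```
index=vpn earliest=-7d
| fieldsummary
| where distinct_count=1 AND match(values, "\"value\":\"null\"")
```

Note that fields that are truly absent (NULL in the SPL sense) never appear in fieldsummary at all; this only catches fields carrying the string "null".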
Hi there, I've run into an issue where I can sort of guess why I'm having problems, though I have no clear idea how to solve it. In our distributed environment we have a "lookup app" on our deployer, TA_lookups/lookups/lookupfile.csv. Recently a coworker added a few new lookup files and made additions to the file in question. This is where the problem manifests: logging onto the deployer and checking that the correct files are present in /opt/splunk/etc/shcluster/apps/TA_lookups/lookups/lookupfile.csv, everything looks great. Applying the bundle worked without any complaints/errors. All the new csv files show up in the cluster and are accessible from the GUI. However, this one file, "lookupfile.csv", is not updated. So I can sort of guess that it may have something to do with the file being in use or something, but I am stumped as to how I should go about solving it. I've tried making some additional changes to the file, checked for any weird line breaking, and nothing. I can see from the CLI that this one file has not been modified since the initial deployment. So the deployer applies the bundle, there are no complaints on either end that I can find; it just skips this one pre-existing csv file completely and, as far as I can see, silently. What do I do here? Is there a way to "force" the push? Is the only way to solve this to manually remove the app from the SH cluster and push again? All suggestions are welcome. Best regards
Hi Team, we are currently using Splunk version 7.2. It was installed by a third party, and we currently don't have the login credentials that were used to download Splunk. If I download the latest version with a free trial and upgrade Splunk, will it keep the existing license, or do we have to download with the same login as before to keep the license? Thanks and Regards, Shalini S
Hi, I blacklisted C:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk.exe in the inputs.conf on the deployment server:

blacklist3 = EvenCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk.exe)

I can still see these logs being ingested into Splunk. How can we stop this ingestion?
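Two things stand out in the stanza as quoted: the key reads EvenCode rather than EventCode, and the Message regex's opening quote is never closed. For comparison, a sketch of a corrected version, assuming it sits in the WinEventLog Security stanza of the inputs.conf that is actually deployed to the forwarders:

```
[WinEventLog://Security]
blacklist3 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk\.exe)"
```

Event-log blacklists are applied by the universal forwarder reading the channel, so the app containing this setting must reach the forwarders via the deployment server, not just live on it.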
Hello, in K8s, on a pod running a Spring Boot 3.x application (with OpenJDK 17) auto-instrumented by the cluster-agent, the Java Agent fails on startup:

[AD Agent init] Wed Sep 27 22:27:38 PDT 2023[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics-java/ver22.9.0.34210]
[AD Agent init] Wed Sep 27 22:27:38 PDT 2023[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics-java/ver22.9.0.34210]
Agent logging directory set to [/opt/appdynamics-java/ver22.9.0.34210/logs]
[AD Agent init] Wed Sep 27 22:27:38 PDT 2023[INFO]: JavaAgent - Agent logging directory set to [/opt/appdynamics-java/ver22.9.0.34210/logs]
Could not start Java Agent, disabling the agent with exception java.lang.NoClassDefFoundError: Could not initialize class org.apache.logging.log4j.message.ReusableMessageFactory, Please check log files

In the pod, the jar file (log4j-api) containing ReusableMessageFactory is there (part of the AppDynamics java-agent):

sh-4.4$ pwd
/opt/appdynamics-java/ver22.9.0.34210/lib/tp
sh-4.4$ ls log4j*
log4j-api-2.17.1.1.9.cached.packages.txt  log4j-core-2.17.1.1.9.cached.packages.txt  log4j-jcl-2.17.1.cached.packages.txt
log4j-api-2.17.1.1.9.jar                  log4j-core-2.17.1.1.9.jar                  log4j-jcl-2.17.1.jar
log4j-api-2.17.1.1.9.jar.asc              log4j-core-2.17.1.1.9.jar.asc             log4j-jcl-2.17.1.jar.asc

From the pod manifest:

- name: JAVA_TOOL_OPTIONS
  value: ' -Dappdynamics.agent.accountAccessKey=$(APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY) -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -Dappdynamics.agent.startup.log.level=debug -Dappdynamics.agent.reuse.nodeName.prefix=eric-tmo-des-ms-entitlements -javaagent:/opt/appdynamics-java/javaagent.jar'

I tried with the latest java-agent (23.9) but got the same result. I don't seem to have the problem with Spring Boot 2.7 (which does include log4j-api, as opposed to 3.x). It seems the classloader can't find the class in the java-agent distribution. Has anyone encountered this? Thank you.
I'm trying to log in to Splunk Cloud with my sc_admin user through a shell script, where I want to log in and fetch logs matching a string I supply, but it is failing. Could you please help me with this?

Script:

#!/bin/bash

# Splunk API endpoint
SPLUNK_URL="https://prd-p-cbutz.splunkcloud.com:8089"

# Splunk username and password
USERNAME=$Username
PASSWORD=$Password

# Search query to retrieve error messages (modify this as needed)
SEARCH_QUERY="sourcetype=error"

# Maximum number of results to retrieve
MAX_RESULTS=10

response=$(curl -k -s -v -u "$USERNAME:$PASSWORD" "$SPLUNK_URL/services/auth/login" -d "username=$USERNAME&password=$PASSWORD")
echo "Response from login endpoint: $response"

# Authenticate with Splunk and obtain a session token
#SESSION_TOKEN=$(curl -k -s -u "$USERNAME:$PASSWORD" "$SPLUNK_URL/services/auth/login" -d "username=$USERNAME&password=$PASSWORD" | xmllint --xpath "//response/sessionKey/text()" -)
SESSION_TOKEN=$(curl -k -s -v -u "$USERNAME:$PASSWORD" "$SPLUNK_URL/services/auth/login" -d "username=$USERNAME&password=$PASSWORD" | grep -oP '<sessionKey>\K[^<]+' | awk '{print $1}')

if [ -z "$SESSION_TOKEN" ]; then
  echo "Failed to obtain a session token. Check your credentials or Splunk URL."
  exit 1
fi

# Perform a search and retrieve error messages
SEARCH_RESULTS=$(curl -k -s -u ":$SESSION_TOKEN" "$SPLUNK_URL/services/search/jobs/export" -d "search=$SEARCH_QUERY" -d "count=$MAX_RESULTS")

# Check for errors in the search results
if [[ $SEARCH_RESULTS == *"ERROR"* ]]; then
  echo "Error occurred while fetching search results:"
  echo "$SEARCH_RESULTS"
  exit 1
fi

# Parse the JSON results and extract relevant information
echo "Splunk Error Messages:"
echo "$SEARCH_RESULTS" | jq -r '.result | .[] | .sourcetype + ": " + .message'

# Clean up: Delete the search job
curl -k -u ":$SESSION_TOKEN" "$SPLUNK_URL/services/search/jobs" -X DELETE

# Logout: Terminate the session
curl -k -u ":$SESSION_TOKEN" "$SPLUNK_URL/services/auth/logout"

exit 0

I'm also not sure whether I'm using the correct port number.

Error:

$ bash abc.sh
* Trying 44.196.237.135:8089...
* connect to 44.196.237.135 port 8089 failed: Timed out
* Failed to connect to prd-p-cbutz.splunkcloud.com port 8089 after 21335 ms: Couldn't connect to server
* Closing connection 0
Response from login endpoint:
* Trying 44.196.237.135:8089...
* connect to 44.196.237.135 port 8089 failed: Timed out
* Failed to connect to prd-p-cbutz.splunkcloud.com port 8089 after 21085 ms: Couldn't connect to server
* Closing connection 0
Failed to obtain a session token. Check your credentials or Splunk URL.
Hi, I have error logs with more than 50 lines per event, but the requirement is to display only the first 10 lines instead of all 50+, and there is no common statement in the events that I could anchor a regex on. Kindly help.
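One search-time approach is to keep the full event in the index but display only its leading lines, counting newlines rather than matching any content, so no common statement is needed. A sketch with a hard-coded 10-line cutoff:

```
| rex field=_raw "(?s)^(?<first10>(?:[^\n]*\n){0,9}[^\n]*)"
| table _time first10
```

The pattern captures up to nine newline-terminated lines plus the line that follows them; events shorter than ten lines come through unchanged.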
Hi Splunkers, I'm trying to extract fields from the raw event below. Can you tell me whether this can be done through rex or substr, and provide examples if possible?

Sample event:

[August 28, 2023 7:22:45 PM EDT] APPLE Interface IF-abcef23fw2/31 [WAN14] Disabled (100%) Designate that a disabled port or surface is the root cause. This event can be circumvent by setting the SuppressDisabledAlerts to FALSE.

Expected new fields:
1) Fruit = APPLE
2) Test = Interface IF-abcef23fw2/31 [WAN14] Disabled (100%)
3) Timestamp = August 28, 2023 7:22:45 PM EDT
4) Message = Interface IF-abcef23fw2/31 [WAN14] Disabled (100%) Designate that a disabled port or surface is the root cause. This event can be circumvent by setting the SuppressDisabledAlerts to FALSE.

Please advise.
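This looks feasible with rex alone. A sketch, under the assumptions that the timestamp is always in the leading square brackets, Fruit is the single word right after it, and Test always ends at the first percentage in parentheses:

```
| rex "^\[(?<Timestamp>[^\]]+)\]\s+(?<Fruit>\S+)\s+(?<Message>.+)"
| rex field=Message "^(?<Test>.+?\(\d+%\))"
```

The first rex peels off the bracketed timestamp and the fruit word, leaving the remainder as Message; the second captures Test as the shortest prefix of Message ending in a "(NN%)" token.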
Dashboard XML: I am using this dashboard to schedule a PDF report, and all panels show data for 7 days. I need to show the time period at the top of the report, like "Time Period: 01-17-2023 to 01-23-2023". How can I do this?

<dashboard>
  <label>Dashboard title</label>
  <row>
    <panel>
      <title>first panel</title>
      <single>
        <search>
          <query>| tstats count as internal_logs where index=_internal</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>second panel</title>
      <single>
        <search>
          <query>| tstats count as audit_logs where index=_audit</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Third panel</title>
      <single>
        <search>
          <query>| tstats count as main_logs where index=main</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</dashboard>
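One way to surface the range is a global search that computes the window with addinfo and sets a token, plus an html panel at the top that renders it. A Simple XML sketch; the token name and date format are assumptions, and the -7d@d/@d range mirrors the panels:

```xml
<search>
  <query>| makeresults | addinfo
| eval period = strftime(info_min_time, "%m-%d-%Y") . " to " . strftime(info_max_time - 86400, "%m-%d-%Y")</query>
  <earliest>-7d@d</earliest>
  <latest>@d</latest>
  <done>
    <set token="time_period">$result.period$</set>
  </done>
</search>
<row>
  <panel>
    <html>
      <h2>Time Period: $time_period$</h2>
    </html>
  </panel>
</row>
```

The global search block goes directly under the dashboard root, before the first row; subtracting 86400 seconds from info_max_time makes the end label read as the last full day (e.g. 01-23-2023 rather than the exclusive boundary).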
I've got the following query to detect that a worker instance of mine is actually doing what it's supposed to on a regular basis. If it doesn't in a particular environment, the query won't return a row for that environment. I thought perhaps I could join the results with a literal dataset of environments, to ensure there is a row for each environment, but despite looking over the documentation, I can't find a way to make the join work. Admittedly, I'm new to Splunk querying, so I might be missing something obvious, or there might be some other way of doing this without `join`.

| mstats sum(worker.my_metric) AS my_metric WHERE index="service_metrics" AND host=my-worker-* earliest=-2h BY host
| eval env = replace(host, "^my-worker-(?<env>[^-]+)$", "\1")
| stats sum(my_metric) AS my_metric BY env
| eval active = IF(my_metric > 0, "yes", "no")
| join type=right left=M right=E WHERE M.env = E.env from [{ env: "dev" }, { env: "beta" }, { env: "prod" }]
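SPL's join doesn't accept an inline literal dataset like that, but the same effect (a guaranteed row per environment) is commonly achieved with append plus a second stats. A sketch, using the three environment names from the post:

```
| mstats sum(worker.my_metric) AS my_metric WHERE index="service_metrics" AND host=my-worker-* earliest=-2h BY host
| eval env = replace(host, "^my-worker-([^-]+)$", "\1")
| append
    [| makeresults
     | eval env = split("dev,beta,prod", ",")
     | mvexpand env
     | eval my_metric = 0
     | fields env my_metric]
| stats sum(my_metric) AS my_metric BY env
| eval active = if(my_metric > 0, "yes", "no")
```

The appended subsearch contributes a zero row for every environment, so after the final stats an idle environment shows my_metric=0 and active="no" instead of disappearing entirely.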
Hello, everyone. I just ran into an issue where a stanza within apps\SplunkUniversalForwarder\local\inputs.conf on a forwarder is overwriting other apps\AppName\local\inputs.conf  from other apps in the apps folder. I would like to either disable this app, or delete the \SplunkUniversalForwarder\local folder or delete the stanza. The problem is that this has happened on multiple hosts and I need an automated method of doing this. Does anyone have an idea so that this default app that I don't even want to touch doesn't overwrite my own actually used apps? Thanks