All Topics

Workday add-on 1.1.0 shows a blank page or stays on loading on a Splunk HF running 8.2.2. I tried restarting several times; see the attached screenshot. Did anyone face a similar issue?
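One place to look for clues is the internal logs; a minimal sketch (the "workday" search term is just a guess at how the add-on's components are named) along these lines can surface errors logged around the time the page loads:

index=_internal "*workday*" (log_level=ERROR OR log_level=WARN)
| stats count by sourcetype component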
I have a KV store that I am writing the results of a search to. It has a field called ASC_IDX, which is defined as a number. If I write a large number (like 1647976533000037808725) to this field with outputlookup, I get the following number in the KV store: -9223372036854775808. The same value appears for every large number I try to put into the KV store. Is there a limit to the number of digits for a number field in a KV store?
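For what it's worth, -9223372036854775808 is the minimum value of a signed 64-bit integer, which is consistent with the value overflowing a 64-bit field. A minimal sketch of a workaround, assuming the value only needs to be stored rather than used in arithmetic, is to write it as a string (my_kvstore_lookup is a placeholder for the lookup definition that points at the collection, and the field would also need to be typed as a string in collections.conf):

| eval ASC_IDX = tostring(ASC_IDX)
| outputlookup my_kvstore_lookup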
I have a Splunk indexer cluster with a single search head. I'm sending data in via HEC directly to the cluster. The events themselves are not JSON and look like this:

host_name audit[86]: key1=val1, key2=val2, key3=val3, key4=[ subkey1: subval1 subkey2: subval2 subkey3: sub val3 subkey4: subval4]

Note that the value of "subkey3" does contain a space; that is intentional. By default, Splunk grabs all the =-delimited fields, so key1, 2, 3 and 4 are extracted nicely. But it won't grab the subkeys, as they are :-delimited and spaces are apparently allowed. I have a regex that will parse them for me:

(?<key>\w+):\s+(?<value>.*?)(?=\s+\w+:|]$)

It leverages a lookahead for the next field. I tried putting this in props/transforms like so:

props.conf:
[my_sourcetype]
REPORT-key4 = parse_key4

transforms.conf:
[parse_key4]
REGEX = (\w+):\s+(.*?)(?=\s+\w+:|]$)
FORMAT = $1::$2
REPEAT_MATCH = true

I deploy both files in their own app to the cluster master (and then apply the cluster bundle) and to the search head (and either use debug/refresh or restart Splunk). But it is not extracting the fields. Any ideas on why it isn't doing the extraction? Note: the goal is that this should be a search-time extraction; I don't need or want it at index time.
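A quick way to sanity-check the pattern at search time, before relying on the props/transforms deployment, is an inline rex (a sketch; my_sourcetype stands in for the real sourcetype, and max_match=0 keeps all of the subkey matches as multivalue fields):

index=* sourcetype=my_sourcetype
| rex max_match=0 "(?<subkey>\w+):\s+(?<subval>.*?)(?=\s+\w+:|\]$)"
| table _time subkey subval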
We have an on-prem Splunk Enterprise instance using a deployment server, indexers, search head, etc. The environment sits on Windows 2019 and Splunk is version 8.2.3.

We recently set up an HTTP Event Collector token for HEC collection. It is working correctly from curl, both in the health check and in absorbing manual calls from curl and Postman, such as:

curl -k http://deploymentserver.local:8088/services/collector/raw -H "Authorization:Splunk 1920123a-f2b1-4c46-b848-6fba456789fe7" -d '{"Sourcetype":"log4j","event":"test"}'

These particular calls are ingested and searchable within Splunk. We have opened a ticket with Splunk, but they have been less than helpful, as they just direct us to the token setup, which as noted above is working.

The issue: it is not ingesting any of our actual application logs. There are no errors, and it has the correct token; it's as if the data is not getting there or is being rejected. We've changed the sourcetype so there is no criteria, and also set it to json, with no difference either way. So we suspect the formatting of our JSON is incorrect. Are there any good samples out there?

This is what the log4j2 configuration looks like:

<?xml version="1.0" encoding="utf-8"?>
<Configuration status="INFO" name="cloudhub" packages="com.appforsplunk.ch.logging.appender, com.splunk.logging ,org.apache.logging.log4j">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%-5p %d [%t] [event: %X{correlationId}] %c: %m%n" />
    </Console>
    <Console name="ConsoleLogUtil" target="SYSTEM_OUT">
      <PatternLayout pattern="%m%n" />
    </Console>
    <RollingFile name="file" fileName="${sys:splunkapp.home}${sys:file.separator}logs${sys:file.separator}splunkapp-custom-logging-api.log" filePattern="${sys:splunkapp.home}${sys:file.separator}logs${sys:file.separator}splunkapp-custom-logging-api-%i.log">
      <PatternLayout pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n" />
      <SizeBasedTriggeringPolicy size="10 MB" />
      <DefaultRolloverStrategy max="10"/>
    </RollingFile>
    <SplunkHttp name="splunk" url="http://deploymentserver.local:8088/services/collector/raw" token="Splunk 50817720-52e2-4481-a2cf-eb519716354c" disableCertificateValidation="true">
      <PatternLayout pattern="%-5p %d [%t] [event: %X{correlationId}] %c: %m%n"/>
    </SplunkHttp>
    <Log4J2CloudhubLogAppender name="CLOUDHUB" addressProvider="com.appforsplunk.ch.logging.DefaultAggregatorAddressProvider" applicationContext="com.appforsplunk.ch.logging.DefaultApplicationContext" appendRetryIntervalMs="${sys:logging.appendRetryInterval}" appendMaxAttempts="${sys:logging.appendMaxAttempts}" batchSendIntervalMs="${sys:logging.batchSendInterval}" batchMaxRecords="${sys:logging.batchMaxRecords}" memBufferMaxSize="${sys:logging.memBufferMaxSize}" journalMaxWriteBatchSize="${sys:logging.journalMaxBatchSize}" journalMaxFileSize="${sys:logging.journalMaxFileSize}" clientMaxPacketSize="${sys:logging.clientMaxPacketSize}" clientConnectTimeoutMs="${sys:logging.clientConnectTimeout}" clientSocketTimeoutMs="${sys:logging.clientSocketTimeout}" serverAddressPollIntervalMs="${sys:logging.serverAddressPollInterval}" serverHeartbeatSendIntervalMs="${sys:logging.serverHeartbeatSendIntervalMs}" statisticsPrintIntervalMs="${sys:logging.statisticsPrintIntervalMs}">
      <PatternLayout pattern="[%d{MM-dd HH:mm:ss}] %-5p %c{1} [%t]: %m%n" />
    </Log4J2CloudhubLogAppender>
  </Appenders>
  <Loggers>
    <AsyncLogger name="org.splunkapp.service.http" level="WARN"/>
    <AsyncLogger name="org.splunkapp.extension.http" level="WARN"/>
    <!-- splunkapp logger -->
    <AsyncLogger name="org.splunkapp.runtime.core.internal.processor.LoggerMessageProcessor" level="INFO"/>
    <AsyncRoot level="INFO">
      <AppenderRef ref="splunk" />
      <AppenderRef ref="CLOUDHUB" />
      <AppenderRef ref="Console"/>
      <AppenderRef ref="file" />
    </AsyncRoot>
  </Loggers>
</Configuration>

Thanks in advance.
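For comparison, a minimal payload that the JSON /event endpoint of HEC accepts looks like the sketch below (the /raw endpoint used above takes the body verbatim rather than as a JSON envelope); the index value here is a placeholder and the token is the same one from the working curl test:

curl -k http://deploymentserver.local:8088/services/collector/event \
  -H "Authorization: Splunk 1920123a-f2b1-4c46-b848-6fba456789fe7" \
  -d '{"sourcetype": "log4j", "index": "main", "event": "test from curl"}'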
I am looking to create a search that will send an alert when the feed has been delayed for 70 minutes or more.
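A minimal sketch of one way to express this, assuming the feed lands in an index named my_feed_index (a placeholder): compare the most recent event time against now() and alert when the gap is 70 minutes or more.

| tstats latest(_time) as last_event where index=my_feed_index
| eval delay_minutes = round((now() - last_event) / 60)
| where delay_minutes >= 70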
Hi folks, I'm using a query like the one below, but since the subsearch returns more than 10K events, I'm not getting the expected result. Can someone advise whether there is an alternative way to replace the subsearch and still achieve the expected result?

index="foo" sourcetype="xyz" user!="abc" method=POST (url="*search*aspx*" AND code!=302 AND code!=304 AND code!=401 AND code!=403 AND code!=0)
    [search index="foo" method_name=pqr message="*Response Time for method pqr*" | fields uniqid]
| eval hour=strftime(_time,"%H")
| where hour >=7 AND hour <=19
| timechart span=1d count(eval(time_took)) as Total , count(eval(time_took<2000)) as Success, count(eval(time_took>2000)) as misses
| sort by "_time" desc

Thanks in advance for the help.
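One subsearch-free pattern (a sketch; it assumes both event types carry the uniqid field and that the timing events can be recognized by their message text) is to pull both legs into a single base search, mark each event, and keep only the uniqid values seen in both legs:

index="foo" ((sourcetype="xyz" user!="abc" method=POST url="*search*aspx*" code!=302 code!=304 code!=401 code!=403 code!=0) OR (method_name=pqr message="*Response Time for method pqr*"))
| eval leg=if(match(coalesce(message,""), "Response Time for method pqr"), "timing", "request")
| eventstats dc(leg) as legs by uniqid
| where legs=2 AND leg="request"

From there, the original eval/where/timechart portion of the query can follow unchanged.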
I have some API response logs separated by pipes, and there is already a field extraction for the API response time; the field value is something like "100 ms". I want to create a table with a windowed count of API response times. The final table I want to create is something like:

responseTime    count
5 ms            {count where responseTime <= 5 ms}
10 ms           {count where responseTime <= 10 ms but > 5 ms}

I can create a simple count table with "base search | stats count by responseTime", which gives results like:

responseTime    count
1 ms            100
2 ms            30

How can I create these windowed stats?
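A sketch of one way to build those bucketed counts, assuming the extracted field is named responseTime and always looks like "<number> ms": strip the unit, then group with case().

base search
| eval rt_ms = tonumber(replace(responseTime, "\s*ms", ""))
| eval bucket = case(rt_ms<=5, "5 ms", rt_ms<=10, "10 ms", true(), "> 10 ms")
| stats count by bucket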
Hello all, it would seem a swift migration to the Splunk Add-on for Microsoft Security is highly recommended: "Customers currently utilizing Microsoft 365 Defender Add-on for Splunk are strongly recommended to migrate to this new Splunk supported add-on after reading the migration section of the documentation." I haven't been able to get this app to work with GCC; has anyone else? Does anyone know when that support is coming?
Hi, can the existing Splunk app(s) be read out with a search? I would like to assign a service to an app via a dropdown in a service request (in another tool), so as a first step I have to read the apps out of Splunk and then integrate them into the other program via an interface. Is this possible, or would it involve too much effort? Thanks.
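As a sketch, the REST endpoint for locally installed apps can be queried from a search (the calling user needs permission to run the rest command):

| rest /services/apps/local splunk_server=local
| table title label version disabled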
Hello, when I run the search below, it gives me "4" in the results at the _time span = 11h:

`index` earliest=@d+7h latest=@d+19h
| bin _time span=1h
| stats dc(sign_id) as "Signal inc" by _time

But with:

`index` earliest=@d+7h latest=@d+19h
| stats avg(citrix) as citrix by sam _time
| eval PbPerf=if(citrix>200,1,0)
| search PbPerf > 0
| bin _time span=1h
| stats dc(sam) as "W" by _time
| appendcols [ search `index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats count as W by _time]
| appendcols [ search `index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats dc(signaler_id) as "Signal inc" by _time ]
| appendcols [ search `index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats count as "TEA" by _time]
| fillnull value=0
| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI
| sort + KPI

my problem is that I must combine this search with other searches in order to display results by _time span. When I execute it together with the other searches, the result for the primary search is still 4, but it is displayed not at the _time span = 11h but at the _time span = 8h!!! What is the problem, please?
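One thing worth knowing is that appendcols pastes subsearch rows onto the main results purely by position, so if one leg is missing an hour, every later row shifts. A sketch of an alternative that keys each leg on _time instead (field and macro names taken from the searches above, showing two of the legs):

`index` earliest=@d+7h latest=@d+19h
| bin _time span=1h
| stats dc(signaler_id) as "Signal inc", count as "TEA" by _time
| append
    [ search `index` earliest=@d+7h latest=@d+19h
      | stats avg(citrix) as citrix by sam _time
      | where citrix>200
      | bin _time span=1h
      | stats dc(sam) as "W" by _time ]
| stats first(*) as * by _time
| fillnull value=0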
I want to create an alert when a user approves MFA from a different IP than the one they used prior to connecting to the VPN. So I'm gathering all successful logins from the msazure index and I would like to match them against IP events from a second index, msvpn. The problem is that IPs from the msvpn index (src_ip) do not occur at the same time as IPs from the msazure index (properties.ipAddress). The numbers of events in the two indexes are also different. How do I find a single IP event from the second index and match it to the closest-in-time IP from the first one? I have already tried multiple searches and none of them worked. Any help would be much appreciated.

index=msazure* operationName="Sign-in activity" "properties.appliedConditionalAccessPolicies{}.enforcedGrantControls{}"=Mfa sourcetype="azure:aad:signin" "properties.authenticationDetails{}.authenticationStepResultDetail"="MFA successfully completed" properties.mfaDetail.authMethod="*" properties.ipAddress="*" properties.userPrincipalName=testuser*
| search NOT properties.networkLocationDetails{}.networkNames{} IN ("xxIPs", "yyIPs")
| iplocation properties.ipAddress
| rename Country as CC2
| table _time properties.userPrincipalName properties.mfaDetail.authMethod properties.ipAddress CC2
| appendcols [search index=msvpn app="ssl:vpn" http_user_agent="xxx*" user=testuser* src_ip=* | iplocation dvc | rename Country as CC1 | table src_ip CC1]
| search properties.ipAddress="*"
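One common pattern for this kind of nearest-earlier-event matching (a sketch; it assumes the user can be normalized to one field across both indexes) is to search both indexes together, sort ascending by time, and let streamstats carry the most recent VPN IP forward onto each sign-in event:

(index=msazure* sourcetype="azure:aad:signin" operationName="Sign-in activity" properties.userPrincipalName=testuser*) OR (index=msvpn app="ssl:vpn" user=testuser* src_ip=*)
| eval account=coalesce('properties.userPrincipalName', user)
| sort 0 _time
| streamstats last(src_ip) as last_vpn_ip by account
| where isnotnull('properties.ipAddress')
| table _time account properties.ipAddress last_vpn_ip

An alert condition could then compare properties.ipAddress against last_vpn_ip.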
Hi, I have an issue where Splunk or the OS is using more and more open files over time. This is causing the CPU to go up and Splunk to get very slow.

The image below shows, at about 1:30 yesterday, where the CPU starts to go up on the cluster. The second graph shows the number of files open in the system.

The first count uses the login user autoengine (used to launch Splunk):
lsof -u autoengine | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'

The second is:
lsof -u autoengine | grep splunk | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'

It looks like one is linked to the other. When I look at what is taking all the open files, I am a bit lost, to be honest. The restart marked below is a restart of the Splunk SH and all the indexers.

Roughly two thirds seem related to chrome, but I am just not sure:

chrome 56068 autoengine mem REG 253,0 15688 8390650 /usr/lib64/libkeyutils.so.1.5
chrome 56068 autoengine mem REG 253,0 58728 8515371 /usr/lib64/libkrb5support.so.0.1
chrome 56068 autoengine mem REG 253,0 11384 8390648 /usr/lib64/libXinerama.so.1.0.0
chrome 56068 autoengine mem REG 253,0 1027384 8406742 /usr/lib64/libepoxy.so.0.0.0
chrome 56068 autoengine mem REG 253,0 35720 9044060 /usr/lib64/libcairo-gobject.so.2.11400.8
chrome 56068 autoengine mem REG 253,0 481320 9044042 /usr/lib64/libGL.so.1.2.0
chrome 56068 autoengine mem REG 253,0 56472 8390526 /usr/lib64/libxcb-render.so.0.0.0
chrome 56068 autoengine mem REG 253,0 15336 8390534 /usr/lib64/libxcb-shm.so.0.0.0
chrome 56068 autoengine mem REG 253,0 179296 8390451 /usr/lib64/libpng15.so.15.13.0
chrome 56068 autoengine mem REG 253,0 189856 9044046 /usr/lib64/libEGL.so.1.0.0
chrome 56068 autoengine mem REG 253,0 698744 8406537 /usr/lib64/libpixman-1.so.0.34.0
chrome 56068 autoengine mem REG 253,0 691736 8390436 /usr/lib64/libfreetype.so.6.10.0
chrome 56068 autoengine mem REG 253,0 255968 8515671 /usr/lib64/libfontconfig.so.1.7.0
chrome 56068 autoengine mem REG 253,0 413936 8599610 /usr/lib64/libharfbuzz.so.0.10302.0
chrome 56068 autoengine mem REG 253,0 6944 8517202 /usr/lib64/libgthread-2.0.so.0.5000.3
chrome 56068 autoengine mem REG 253,0 52032 8599640 /usr/lib64/libthai.so.0.1.6
chrome 56068 autoengine mem REG 253,0 92272 9044056 /usr/lib64/libpangoft2-1.0.so.0.4000.4
chrome 56068 autoengine mem REG 253,0 269416 8517192 /usr/lib64/libmount.so.1.1.0
chrome 56068 autoengine mem REG 253,0 111080 8390378 /usr/lib64/libresolv-2.17.so
chrome 56068 autoengine mem REG 253,0 155752 8390430 /usr/lib64/libselinux.so.1
chrome 56068 autoengine mem REG 253,0 15632 8517198 /usr/lib64/libgmodule-2.0.so.0.5000.3
chrome 56068 autoengine mem REG 253,0 90632 8390432 /usr/lib64/libz.so.1.2.7
chrome 56068 autoengine mem REG 253,0 41080 8390354 /usr/lib64/libcrypt-2.17.so
chrome 56068 autoengine mem REG 253,0 69960 8390582 /usr/lib64/libavahi-client.so.3.2.9
chrome 56068 autoengine mem REG 253,0 53848 8390584 /usr/lib64/libavahi-common.so.3.5.3
chrome 56068 autoengine mem REG 253,0 2520768 8515374 /usr/lib64/libcrypto.so.1.0.2k
chrome 56068 autoengine mem REG 253,0 470376 8515376 /usr/lib64/libssl.so.1.0.2k
chrome 56068 autoengine mem REG 253,0 15848 8390438 /usr/lib64/libcom_err.so.2.1
chrome 56068 autoengine mem REG 253,0 210768 8515363 /usr/lib64/libk5crypto.so.3.1
chrome 56068 autoengine mem REG 253,0 963504 8515369 /usr/lib64/libkrb5.so.3.3
chrome 56068 autoengine mem REG 253,0 320776 8515359 /usr/lib64/libgssapi_krb5.so.2.2
chrome 56068 autoengine mem REG 253,0 15744 8390441 /usr/lib64/libplds4.so
chrome 56068 autoengine mem REG 253,0 20040 8390440 /usr/lib64/libplc4.so
chrome 56068 autoengine mem REG 253,0 32304 8391190 /usr/lib64/libffi.so.6.0.1
chrome 56068 autoengine mem REG 253,0 402384 8390420 /usr/lib64/libpcre.so.1.2.0
chrome 56068 autoengine mem REG 253,0 15512 8390474 /usr/lib64/libXau.so.6.0.0
chrome 56068 autoengine mem REG 253,0 2127336 8390350 /usr/lib64/libc-2.17.so
chrome 56068 autoengine mem REG 253,0 88720 8389122 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
chrome 56068 autoengine mem REG 253,0 166328 9027824 /usr/lib64/libgdk_pixbuf-2.0.so.0.3600.5
chrome 56068 autoengine mem REG 253,0 761616 9555002 /usr/lib64/libgdk-3.so.0.2200.10
chrome 56068 autoengine mem REG 253,0 7466528 9555004 /usr/lib64/libgtk-3.so.0.2200.10
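To see which command is actually holding the descriptors, the same lsof output can be aggregated by process name (a sketch in the same spirit as the counting commands above):

lsof -u autoengine | awk '{print $1}' | sort | uniq -c | sort -rn | head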
I need to filter a multivalue field of strings by substring/character.

Field coordinatorsID: a##1, b##2, c##3, d##3
Field expertiseLevel can be one of "1", "2", "3"

Example: expertiseLevel = "3" result -> c##3, d##3

What I tried:

Attempt 1:
| eval coordinatorsID_filtered = mvfilter(like(coordinatorsID,"%$expertiseLevel$"))
error

Attempt 2:
| eval expertiseLevel = case(expertiseLevel == "1", "%1", expertiseLevel == "2", "%2", expertiseLevel == "3", "%3")
| eval coordinatorsID_filtered = mvfilter(like(coordinatorsID,$expertiseLevel$))
null
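A sketch of a variant that tends to work when $expertiseLevel$ is a dashboard token: let the token be substituted straight into the like() pattern as a literal (mvfilter can only reference the one field it is filtering, so the level cannot be passed in via a second field). This assumes the IDs always end in "##<level>" as in the example above:

| eval coordinatorsID_filtered = mvfilter(like(coordinatorsID, "%##$expertiseLevel$"))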
I am seeing an x509 certificate error in splunkd.log. I would like to know whether I can turn the SSL certificate validation feature off, what the Splunk Cloud configuration file is and where I can find it, or what the way to resolve the x509 certificate error is.
Hi folks, can someone help me with the below? I have the following message in the log and need to extract the time portion alone:

[END~~Method.Search_Button] Response Time Search_Button: 00:00:00.3004626

The output I need is 00:00:00.3004626. Also, if possible, can you suggest how I can get the value in milliseconds? For the above message, I should get 300ms. If the output is "00:00:01.250" then I should get 1250ms. Thanks.
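A sketch of one way to do both steps with rex and eval (resp_time, parts and resp_ms are just illustrative field names):

| rex "Response Time Search_Button:\s+(?<resp_time>\d{2}:\d{2}:\d{2}\.\d+)"
| eval parts = split(resp_time, ":")
| eval resp_ms = round((tonumber(mvindex(parts,0))*3600 + tonumber(mvindex(parts,1))*60 + tonumber(mvindex(parts,2))) * 1000)

For "00:00:00.3004626" this yields 300, and for "00:00:01.250" it yields 1250.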
Hello, I submitted my Splunk add-on for cloud vetting. It failed on both "VICTORIA" and "CLASSIC"; however, no errors or failures were found in the compatibility report:

Failures 0
Warnings 11
Errors 0
Not Applicable 42
Manual Checks 17
Skipped 0
Successes 143

I used the Splunk Add-on Builder and did not add any libraries myself. All warnings and manual checks are from within the Add-on Builder's aob_py3 directory. Example:

Possible insecure HTTP Connection.
Match: urllib2.urlopen
Positional arguments, ["?"]; Keyword arguments, {"null": "?"}
File: bin/ta_island_add_on_for_splunk/aob_py3/jsonschema/compat.py
Line Number: 36

How can I understand why the vetting failed? How can I use the Add-on Builder to build an add-on that passes cloud vetting? Thank you, Omer.
Hi all, I am considering updating our index retention policy. However, I am not sure how to choose the maximum possible allocated space. We have a few indexes, and one of them takes about half of the total index volume. We would like to keep the data for as long as possible, but we have limited storage. For simplicity, let's say we have 1 TB of storage, a single instance, and 10 indexes. As far as I understand, it would be best to use maxTotalDataSizeMB to set the maximum MB per index. However, I can't simply divide the 1 TB across the indexes, as only some of the space can be taken up by indexed data. So my questions are:

1) How should I choose what maxTotalDataSizeMB per index should be?
2) How can I use the server storage to the maximum without getting Splunk problems?
3) Is it reasonable to calculate the total index storage by looking at the total storage used outside of the /opt/splunk/var/lib directory and then deciding how much storage can be allocated to indexes? What approach do you recommend?
4) What approach would you recommend in my case? Is it reasonable to keep data for as long as possible, and are there reasons to avoid this approach?
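For reference, the per-index size cap and the time-based retention limit both live in indexes.conf; a minimal sketch for one index might look like the following (the values are placeholders, not a recommendation):

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
maxTotalDataSizeMB = 100000
frozenTimePeriodInSecs = 31536000

Whichever of the two limits is reached first causes the oldest buckets to be frozen (deleted by default).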
Hi, can someone help correct the query below, which should send an alert if a STOPPED status is detected 3 consecutive times within a specific time range, for example from 7am to 8pm?

index="hcg_ph_t2_bigdataservices_prod" sourcetype="be:streaming-services"
| search kafka_count="STOPPED"
| stats count by _time,sourcetype,STOPPED
| sort count desc
| eval threshold=3
| where count >=threshold
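A sketch of one way to look for consecutive occurrences rather than a plain count, assuming the status value really lives in the kafka_count field as in the search above; streamstats resets its running count whenever the status changes, and date_hour restricts the window to 07:00-20:00:

index="hcg_ph_t2_bigdataservices_prod" sourcetype="be:streaming-services" date_hour>=7 date_hour<20
| sort 0 _time
| streamstats count as run_length reset_on_change=true by kafka_count
| where kafka_count="STOPPED" AND run_length>=3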
I have a systemout.log file and I am indexing it using the pretrained sourcetype websphere_trlog_sysout. Currently there is an issue with masking. I created props.conf in the deployment app and deployed it to the universal forwarder as below, and created transforms.conf to mask the data. The issue still seems to be the same.

[websphere_trlog_sysout]
TIME_FORMAT = %d-%m-%y %H:%M:%S:%3Q %Z
TIME_PREFIX = ^\[
FORMAT = $1-$2-$3 $4:$5:$6:$7 $8
TRANSFORMS-anonymize = session-anonymizer

[session-anonymizer]
REGEX = XXXXX
FORMAT = $1XXXX$2
DEST_KEY = _raw
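For comparison, a sketch of the same idea using SEDCMD instead of a TRANSFORMS stanza (the pattern here is a placeholder, since the real REGEX was redacted as XXXXX above); note that this kind of masking happens at parse/index time, so the props.conf needs to live on the first full Splunk Enterprise instance in the path (the indexers or a heavy forwarder), not on a universal forwarder:

[websphere_trlog_sysout]
SEDCMD-anonymize = s/sessionid=\w+/sessionid=XXXX/g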
Can anyone help with understanding the latest OpenSSL vulnerability, CVE-2022-0778, and how/if it affects Splunk installations? https://www.cve.org/CVERecord?id=CVE-2022-0778