Hello. When I run the search below, it gives me "4" in the results at the _time span = 11h:

`index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats dc(sign_id) as "Signal inc" by _time

But with this search:

`index` earliest=@d+7h latest=@d+19h
| stats avg(citrix) as citrix by sam _time
| eval PbPerf=if(citrix>200,1,0)
| search PbPerf > 0
| bin _time span=1h
| stats dc(sam) as "W" by _time
| appendcols [ search `index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats count as W by _time ]
| appendcols [ search `index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats dc(signaler_id) as "Signal inc" by _time ]
| appendcols [ search `index` earliest=@d+7h latest=@d+19h | bin _time span=1h | stats count as "TEA" by _time ]
| fillnull value=0
| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI
| sort + KPI

my problem is that I must use this search together with other searches in order to display results by _time span. When I execute it with the other searches, the result for the primary search is still 4, but it is displayed at _time span = 8h instead of _time span = 11h! What is the problem, please?
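A note on one common cause of this symptom, sketched under the assumption that one of the appendcols subsearches returns fewer hourly rows than the others: appendcols pastes rows together by position, not by _time, so a missing hourly bucket in any subsearch shifts every later value onto the wrong hour. Replacing appendcols with append plus a merging stats keeps each value keyed to its own _time bucket:

```spl
`index` earliest=@d+7h latest=@d+19h
| stats avg(citrix) as citrix by sam _time
| eval PbPerf=if(citrix>200,1,0)
| where PbPerf > 0
| bin _time span=1h
| stats dc(sam) as "W" by _time
| append
    [ search `index` earliest=@d+7h latest=@d+19h
      | bin _time span=1h
      | stats dc(signaler_id) as "Signal inc" by _time ]
| stats values(*) as * by _time
| fillnull value=0
```

Each additional KPI can be another append block. Note also that the original search names two different values "W" (dc(sam) and the count subsearch), which would collide even with correct time alignment.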
I want to create an alert for when a user approves MFA from a different IP than the one he used prior to connecting to the VPN. So I'm gathering all successful logins from the msazure index, and I would like to match them against IP events from a second index, msvpn. The problem is that IPs from the msvpn index (src_ip) do not occur at the same time as IPs from the msazure index (properties.ipAddress). The quantity of events in the two indexes is also different. How do I take a single IP event from the second index and match it to the closest-in-time IP from the first one? I have already tried multiple searches and none of them worked. Any help would be much appreciated.

index=msazure* operationName="Sign-in activity" "properties.appliedConditionalAccessPolicies{}.enforcedGrantControls{}"=Mfa sourcetype="azure:aad:signin" "properties.authenticationDetails{}.authenticationStepResultDetail"="MFA successfully completed" properties.mfaDetail.authMethod="*" properties.ipAddress="*" properties.userPrincipalName=testuser*
| search NOT properties.networkLocationDetails{}.networkNames{} IN ("xxIPs", "yyIPs")
| iplocation properties.ipAddress
| rename Country as CC2
| table _time properties.userPrincipalName properties.mfaDetail.authMethod properties.ipAddress CC2
| appendcols
    [ search index=msvpn app="ssl:vpn" http_user_agent="xxx*" user=testuser* src_ip=*
      | iplocation dvc
      | rename Country as CC1
      | table src_ip CC1 ]
| search properties.ipAddress="*"
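One way to sketch "closest preceding event" matching, assuming the field names from the searches above: pull both indexes in a single search, sort chronologically per user, and use streamstats to carry the most recent MFA IP forward onto each VPN event, so every VPN login is compared against the nearest earlier MFA approval:

```spl
(index=msazure* sourcetype="azure:aad:signin"
    "properties.authenticationDetails{}.authenticationStepResultDetail"="MFA successfully completed")
OR (index=msvpn app="ssl:vpn" user=testuser* src_ip=*)
| eval user=coalesce('properties.userPrincipalName', user)
| eval mfa_ip=if(match(index, "^msazure"), 'properties.ipAddress', null())
| sort 0 user _time
| streamstats last(mfa_ip) as last_mfa_ip by user
| where isnotnull(src_ip) AND isnotnull(last_mfa_ip) AND src_ip != last_mfa_ip
```

This relies on streamstats ignoring null values of mfa_ip, which makes last(mfa_ip) a fill-forward of the latest MFA IP per user; appendcols, by contrast, joins rows by position and cannot express a time relationship.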
Hi, I have an issue where Splunk or the OS is using more and more open files over time. This is causing the CPU usage to go up and Splunk to get very slow. The image below shows, at about 1:30 yesterday, where the CPU starts to go up on the cluster. The second graph shows the number of open files on the system.

The first count uses the login user autoengine (used to launch Splunk):

lsof -u autoengine | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'

The second is:

lsof -u autoengine | grep splunk | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'

It looks like one is linked to the other. When I look at what is taking all the open files, I am a bit lost, to be honest. (The restart marked below is a restart of the Splunk SH and all the indexers.) About two thirds of it is related to chrome, but I am just not sure:

chrome 56068 autoengine mem REG 253,0 15688 8390650 /usr/lib64/libkeyutils.so.1.5
chrome 56068 autoengine mem REG 253,0 58728 8515371 /usr/lib64/libkrb5support.so.0.1
chrome 56068 autoengine mem REG 253,0 11384 8390648 /usr/lib64/libXinerama.so.1.0.0
chrome 56068 autoengine mem REG 253,0 1027384 8406742 /usr/lib64/libepoxy.so.0.0.0
chrome 56068 autoengine mem REG 253,0 35720 9044060 /usr/lib64/libcairo-gobject.so.2.11400.8
chrome 56068 autoengine mem REG 253,0 481320 9044042 /usr/lib64/libGL.so.1.2.0
chrome 56068 autoengine mem REG 253,0 56472 8390526 /usr/lib64/libxcb-render.so.0.0.0
chrome 56068 autoengine mem REG 253,0 15336 8390534 /usr/lib64/libxcb-shm.so.0.0.0
chrome 56068 autoengine mem REG 253,0 179296 8390451 /usr/lib64/libpng15.so.15.13.0
chrome 56068 autoengine mem REG 253,0 189856 9044046 /usr/lib64/libEGL.so.1.0.0
chrome 56068 autoengine mem REG 253,0 698744 8406537 /usr/lib64/libpixman-1.so.0.34.0
chrome 56068 autoengine mem REG 253,0 691736 8390436 /usr/lib64/libfreetype.so.6.10.0
chrome 56068 autoengine mem REG 253,0 255968 8515671 /usr/lib64/libfontconfig.so.1.7.0
chrome 56068 autoengine mem REG 253,0 413936 8599610 /usr/lib64/libharfbuzz.so.0.10302.0
chrome 56068 autoengine mem REG 253,0 6944 8517202 /usr/lib64/libgthread-2.0.so.0.5000.3
chrome 56068 autoengine mem REG 253,0 52032 8599640 /usr/lib64/libthai.so.0.1.6
chrome 56068 autoengine mem REG 253,0 92272 9044056 /usr/lib64/libpangoft2-1.0.so.0.4000.4
chrome 56068 autoengine mem REG 253,0 269416 8517192 /usr/lib64/libmount.so.1.1.0
chrome 56068 autoengine mem REG 253,0 111080 8390378 /usr/lib64/libresolv-2.17.so
chrome 56068 autoengine mem REG 253,0 155752 8390430 /usr/lib64/libselinux.so.1
chrome 56068 autoengine mem REG 253,0 15632 8517198 /usr/lib64/libgmodule-2.0.so.0.5000.3
chrome 56068 autoengine mem REG 253,0 90632 8390432 /usr/lib64/libz.so.1.2.7
chrome 56068 autoengine mem REG 253,0 41080 8390354 /usr/lib64/libcrypt-2.17.so
chrome 56068 autoengine mem REG 253,0 69960 8390582 /usr/lib64/libavahi-client.so.3.2.9
chrome 56068 autoengine mem REG 253,0 53848 8390584 /usr/lib64/libavahi-common.so.3.5.3
chrome 56068 autoengine mem REG 253,0 2520768 8515374 /usr/lib64/libcrypto.so.1.0.2k
chrome 56068 autoengine mem REG 253,0 470376 8515376 /usr/lib64/libssl.so.1.0.2k
chrome 56068 autoengine mem REG 253,0 15848 8390438 /usr/lib64/libcom_err.so.2.1
chrome 56068 autoengine mem REG 253,0 210768 8515363 /usr/lib64/libk5crypto.so.3.1
chrome 56068 autoengine mem REG 253,0 963504 8515369 /usr/lib64/libkrb5.so.3.3
chrome 56068 autoengine mem REG 253,0 320776 8515359 /usr/lib64/libgssapi_krb5.so.2.2
chrome 56068 autoengine mem REG 253,0 15744 8390441 /usr/lib64/libplds4.so
chrome 56068 autoengine mem REG 253,0 20040 8390440 /usr/lib64/libplc4.so
chrome 56068 autoengine mem REG 253,0 32304 8391190 /usr/lib64/libffi.so.6.0.1
chrome 56068 autoengine mem REG 253,0 402384 8390420 /usr/lib64/libpcre.so.1.2.0
chrome 56068 autoengine mem REG 253,0 15512 8390474 /usr/lib64/libXau.so.6.0.0
chrome 56068 autoengine mem REG 253,0 2127336 8390350 /usr/lib64/libc-2.17.so
chrome 56068 autoengine mem REG 253,0 88720 8389122 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
chrome 56068 autoengine mem REG 253,0 166328 9027824 /usr/lib64/libgdk_pixbuf-2.0.so.0.3600.5
chrome 56068 autoengine mem REG 253,0 761616 9555002 /usr/lib64/libgdk-3.so.0.2200.10
chrome 56068 autoengine mem REG 253,0 7466528 9555004 /usr/lib64/libgtk-3.so.0.2200.10
I need to filter a multivalue field of strings by substring/character.

Field coordinatorsID: a##1, b##2, c##3, d##3
Field expertiseLevel can be one of "1", "2", "3".

Example: expertiseLevel = "3" should result in c##3, d##3.

What I tried:

Attempt 1 (gives an error):
| eval coordinatorsID_filtered = mvfilter(like(coordinatorsID,"%$expertiseLevel$"))

Attempt 2 (gives null):
| eval expertiseLevel = case(expertiseLevel == "1", "%1", expertiseLevel == "2", "%2", expertiseLevel == "3", "%3")
| eval coordinatorsID_filtered = mvfilter(like(coordinatorsID,$expertiseLevel$))
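A sketch of one likely fix, assuming the dashboard substitutes the $expertiseLevel$ token into the search string before it runs: like() must match the whole value, so a leading % wildcard plus the literal ## separator keeps level "1" from also matching IDs that merely end in 1. With expertiseLevel = 3, the pattern below becomes %##3:

```spl
| eval coordinatorsID_filtered = mvfilter(like(coordinatorsID, "%##$expertiseLevel$"))
```

mvfilter() applies the predicate to each element of the multivalue field individually, so the result keeps only the matching elements (c##3 and d##3 in the example).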
I am seeing an x509 certificate error in splunkd.log. I would like to know whether I can turn the SSL certificate feature off, what the Splunk Cloud configuration file is and where I can find it, and what the way is to resolve the x509 certificate error.
Hi folks, can someone help me with the below? I have the following message in the log and need to extract the time portion alone:

[END~~Method.Search_Button] Response Time Search_Button: 00:00:00.3004626

The output I need is 00:00:00.3004626. Also, if possible, can you suggest how I can get the value in milliseconds? For the above message I should get 300ms; if the output is "00:00:01.250" then I should get 1250ms. Thanks.
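One way to sketch this in SPL (the regex can be adjusted to the exact log format): extract the duration with rex, split it on the colons, and convert to milliseconds with eval arithmetic. For 00:00:00.3004626 this yields 300, and for 00:00:01.250 it yields 1250:

```spl
| rex "Response Time Search_Button: (?<resp_time>\d{2}:\d{2}:\d{2}\.\d+)"
| eval parts=split(resp_time, ":")
| eval resp_ms=round((tonumber(mvindex(parts, 0))*3600
                    + tonumber(mvindex(parts, 1))*60
                    + tonumber(mvindex(parts, 2)))*1000)
```

split() produces a multivalue field, so mvindex() picks out hours, minutes, and seconds; tonumber() on "00.3004626" keeps the fractional seconds, and round() drops the sub-millisecond remainder.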
Hello, I submitted my Splunk add-on for cloud vetting. It failed on both "VICTORIA" and "CLASSIC"; however, no errors or failures were found in the compatibility report:

Failures 0
Warnings 11
Errors 0
Not Applicable 42
Manual Checks 17
Skipped 0
Successes 143

I used the Splunk Add-on Builder and did not add any libraries myself. All warnings and manual checks come from within the Add-on Builder's aob_py3. Example:

Possible insecure HTTP Connection.
Match: urllib2.urlopen
Positional arguments, ["?"]; Keyword arguments, {"null": "?"}
File: bin/ta_island_add_on_for_splunk/aob_py3/jsonschema/compat.py
Line Number: 36

How can I understand why the vetting failed? How can I use the Add-on Builder to build an add-on that passes cloud vetting? Thank you, Omer.
Hi all, I am considering updating our index retention policy. However, I am not sure how to choose the maximum possible allocated space. We have a few indexes, and one of them takes about half of the total index volume. We would like to keep the data for as long as possible, but we have limited storage. For simplicity, let's say we have 1 TB of storage, a single instance, and 10 indexes. As far as I understand, it would be best to use maxTotalDataSizeMB to set the maximum MB per index. However, I can't simply divide the 1 TB across the indexes, as only some of the space can be taken up by indexed data. So my questions are:

1) How should I choose the maxTotalDataSizeMB per index?
2) How can I use the server storage to the maximum without running into Splunk problems?
3) Is it reasonable to calculate the total index storage by looking at the storage used outside of the /opt/splunk/var/lib directory and then deciding how much storage can be allocated to indexes?
4) What approach would you recommend in my case? Is it reasonable to keep data for as long as possible, and are there reasons to avoid this approach?
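Not an answer to every question, but a sketch of the volume-based approach often used for question 2: define a volume with a hard cap below the physical disk size (leaving headroom for the OS, dispatch directory, and other Splunk internals), point the homePath/coldPath of every index at that volume, and optionally add per-index maxTotalDataSizeMB caps on top. All sizes below are illustrative for the 1 TB example, not recommendations:

```ini
# indexes.conf (illustrative sizes for a 1 TB disk)
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 800000          ; cap all indexes at ~800 GB combined

[big_index]
homePath   = volume:primary/big_index/db
coldPath   = volume:primary/big_index/colddb
thawedPath = $SPLUNK_DB/big_index/thaweddb   ; thawedPath cannot reference a volume
maxTotalDataSizeMB = 400000           ; per-index cap on top of the volume cap
```

With a volume cap in place, Splunk freezes the oldest buckets across the volume when the cap is reached, so one oversized index cannot fill the disk on its own.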
Hi, can someone help correct the query below? It should send an alert if a STOPPED status is detected 3 consecutive times within a specific time range, for example from 7am to 8pm.

index="hcg_ph_t2_bigdataservices_prod" sourcetype="be:streaming-services"
| search kafka_count="STOPPED"
| stats count by _time,sourcetype,STOPPED
| sort count desc
| eval threshold=3
| where count >= threshold
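The stats count by _time above counts events per timestamp rather than consecutive occurrences. One way to sketch "three consecutive STOPPED events" is a sliding streamstats window over the chronologically sorted events, with the 07:00-20:00 range expressed as time modifiers; the window count only reaches 3 when the last three events are all STOPPED:

```spl
index="hcg_ph_t2_bigdataservices_prod" sourcetype="be:streaming-services"
    earliest=@d+7h latest=@d+20h
| sort 0 _time
| streamstats window=3 count(eval(kafka_count=="STOPPED")) as stopped_count
| where stopped_count >= 3
```

Because the window covers all events, not just STOPPED ones, any intervening non-STOPPED event keeps stopped_count below 3, which is the "consecutive" requirement.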
I have a systemout.log file and I am indexing it using the pretrained sourcetype websphere_trlog_sysout. Currently there is an issue with masking. I created a props.conf in the deployment app and deployed it to the universal forwarder as below, and created a transforms.conf to mask the data. The issue is still the same.

props.conf:

[websphere_trlog_sysout]
TIME_FORMAT = %d-%m-%y %H:%M:%S:%3Q %Z
TIME_PREFIX = ^\[
FORMAT = $1-$2-$3 $4:$5:$6:$7 $8
TRANSFORMS-anonymize = session-anonymizer

transforms.conf:

[session-anonymizer]
REGEX = XXXXX
FORMAT = $1XXXX$2
DEST_KEY = _raw
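One frequent cause of this symptom, offered here as an assumption about this deployment: index-time TRANSFORMS (and SEDCMD) run on the parsing tier, so deploying props.conf/transforms.conf to a universal forwarder has no effect, because a UF does not parse events (unless the sourcetype uses INDEXED_EXTRACTIONS, which the pretrained WebSphere sourcetype does not). The same masking stanza placed on the indexers, or on a heavy forwarder in front of them, would apply. The regex below is a placeholder, not the real pattern:

```ini
; props.conf -- on the indexers / heavy forwarder, not the UF
[websphere_trlog_sysout]
TRANSFORMS-anonymize = session-anonymizer

; transforms.conf -- REGEX must have capture groups for $1/$2 to reference
[session-anonymizer]
REGEX    = (.*session_id=)\w+(.*)
FORMAT   = $1XXXX$2
DEST_KEY = _raw
```

Note also that FORMAT = $1XXXX$2 only works if REGEX actually defines two capture groups; a literal REGEX with no groups leaves $1 and $2 empty.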
Can anyone help with understanding the latest openssl vulnerability CVE-2022-0778 and how/if it affects Splunk installations? https://www.cve.org/CVERecord?id=CVE-2022-0778
Hi Team, I have recently created a trial account. I have received login details from Splunk (admin user). Can I go ahead using the same? I just want to get confirmation. Thanks, Venkat
Hello - How do I check the supplier creation date in Buying Inspector?
Dear professionals, I have a search string like this:

index="hcg_oapi_prod" relatedPersons NOT (firstName OR middleName OR lastName)
| regex "\"relatedPersons\":\[\]"

And this is the result. The results have a "bankAccount" value. Then I add timechart like this:

index="hcg_oapi_prod" relatedPersons NOT (firstName OR middleName OR lastName)
| regex "\"relatedPersons\":\[\]"
| timechart span=1m count as today
| fields today

How can I add a column for bankAccount when today = 1? Thank you.
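One way to sketch this, assuming bankAccount is already an extracted field on those events: timechart only keeps aggregates, so replace it with bin plus stats, which puts the per-minute count and the bank account values in the same row:

```spl
index="hcg_oapi_prod" relatedPersons NOT (firstName OR middleName OR lastName)
| regex "\"relatedPersons\":\[\]"
| bin _time span=1m
| stats count as today, values(bankAccount) as bankAccount by _time
| where today >= 1
| fields _time today bankAccount
```

values() collects every distinct bankAccount in each one-minute bucket, so minutes with today = 1 show exactly the one account involved.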
Regarding the account registered on Splunkbase: is it possible to change the registration information? I would like to change the company name and email address. If they cannot be changed, could you advise another way to handle this?
As the title suggests, I am trying to copy Splunk native user accounts from a standalone SH to a search head cluster, and I only need to migrate specific user accounts. For this, I am planning to copy the respective lines from the /etc/passwd file of the standalone SH to the SHC. However, after checking out this doc, it appears that if I copy the lines into the existing /etc/passwd file on a SHC member, it won't be replicated; I would have to push the file from the deployer. I am not exactly sure how I should do that from the deployer.
Hi Splunk, the time field is being extracted incorrectly. I want to use the first alert_at to search for results; how do I change this?
Hi, is there a way to migrate the current running config from a local account to some other account (AD)? How can I accomplish this?
Hi Splunk, we are currently using Splunk v6.6.3 in our environment. Is it possible to upgrade directly from 6.6.3 to 8.x? If not, what procedure should we follow to upgrade to Splunk v8.x? Kindly give me some suggestions on this.
index="***" sourcetype="xaxd:*****" "GrantContributorAccess" "Assigned Contributor role to user"
| rex field=Message "\[****=(?<accessId>.*?)\] - Assigned Contributor role to user (?<customerEmail>.*?) for customerId=(?<customerId>.*?) in directoryName=(?<azureDirectory>.*?) in subscriptionId=(?<subscriptionId>.*?)$"
| stats max(_time) as LATEST_ASSIGN by customerEmail
| eval LATEST_ASSIGN=strftime(LATEST_ASSIGN,"%Y-%m-%d %H:%M:%S")
| map maxsearches=1000 search="search index="***" sourcetype="xaxd:*****" "RevokeContributorAccess" "Deleting user $customerEmail$" earliest=$LATEST_ASSIGN$"
| rex field=Message "\[RevokeContributorAccess=(?<accessId>.*?)\] - Deleting user (?<customerEmail>.*?) from AzureAD$"
| stats max(_time) as LATEST_REVOKE by customerEmail
| eval LATEST_REVOKE=strftime(LATEST_REVOKE,"%Y-%m-%d %H:%M:%S")

I want to use the field "LATEST_ASSIGN" in the map subsearches as their "earliest" time. Please help. Thanks in advance. Prem
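A sketch of one way to make this work, keeping the field names from the search above: earliest= accepts epoch seconds, so pass the raw max(_time) into the map template and apply strftime only for display, after the map. Quotes inside the map search string need escaping:

```spl
| stats max(_time) as LATEST_ASSIGN_epoch by customerEmail
| map maxsearches=1000 search="search index=\"***\" sourcetype=\"xaxd:*****\" \"RevokeContributorAccess\" \"Deleting user $customerEmail$\" earliest=$LATEST_ASSIGN_epoch$
    | stats max(_time) as LATEST_REVOKE by customerEmail"
| eval LATEST_REVOKE=strftime(LATEST_REVOKE, "%Y-%m-%d %H:%M:%S")
```

In the original, the formatted "%Y-%m-%d %H:%M:%S" string is not a valid earliest= value (it contains a space and is not a recognized time format), which is the likely reason the subsearches fail.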