All Topics

My Splunk index has a problem: Total License Usage is near 90% of the daily quota. How do I solve this, please?
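A minimal sketch for narrowing down which indexes and sourcetypes are driving the usage, assuming the standard license_usage.log on the license manager:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx, st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB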
Hi, I am working on a POC project. I need to add a timestamp field in order to use the predict algorithm. My dataset, however, has the fields below: MonthYear, Year, Quarter, Month, Subscription ID, Subscription name. I was looking for a way to randomly assign timestamps over a period of one year. I have data from March 2023 till May 2023.

March 2023   Qtr 1   March   020b3b0c-5b0a-41a1-8cd7-90cbd63e06   SUB-PRD-EU-EDLCONTACTS
March 2023   Qtr 1   March   020b3b0c-5b0a-41a1-8cd7-90cbd63e06   SUB-PRD-EU-EDLCONTACTS
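A minimal sketch of one way to synthesize a _time value, assuming MonthYear holds values like "March 2023": parse the month start, then spread events randomly across each month:

| eval month_start = strptime(MonthYear, "%B %Y")
| eval _time = month_start + (random() % 2592000)   ``` random offset within ~30 days ```
| sort 0 _time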
Hi, I am trying to look up data related to EventCode=4662, but it does not show up in Splunk. Additionally, I checked inputs.conf on the indexer and the stanza was not present, so I copied inputs.conf from default:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
index = wineventlog
renderXml = false

I have checked in Windows Event Viewer on our Domain Controller that Event 4662 is present, but Splunk searches for EventCode=4662 produce no results. What I want to see is EventCode 4662 where the message contains "Object Type: user". Here is the Event Viewer log that I want Splunk to show:

An operation was performed on an object.
Subject:
  Security ID:    CIMBNIAGA\YT91504X
  Account Name:   YT91504X
  Account Domain: CIMBNIAGA
  Logon ID:       0xC2D9E1AC
Object:
  Object Server:  DS
  Object Type:    user
  Object Name:    CN=ADJOINADMIN,OU=Functional ID,OU=Special_OU,DC=cimbniaga,DC=co,DC=id
  Handle ID:      0x0
Operation:
  Operation Type: Object Access
  Accesses:       READ_CONTROL
  Access Mask:    0x20000
  Properties:     READ_CONTROL {bf967aba-0de6-11d0-a285-00aa003049e2}
Additional Information:
  Parameter 1:    -
  Parameter 2:

Please help me, I am really stuck. I already tried deleting the blacklist filtering, but it still does not give me the log I want, as shown above. @kheo_splunk
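One thing worth checking, as a sketch rather than a confirmed fix: blacklist1 discards any EventCode 4662 event whose Object Type is not groupPolicyContainer, which would drop exactly the "Object Type: user" events shown above. Also, [WinEventLog://Security] only takes effect on the machine collecting the log (the forwarder on the Domain Controller), not on the indexer. Removing that line there and restarting the forwarder might look like:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
# blacklist1 removed so 4662 events with "Object Type: user" are no longer filtered out
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
index = wineventlog
renderXml = false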
I had data like this in Splunk:

DT=2023-09-13T23:59:56.029-0500|LogId=WFTxLog|AppId=SWBS|AppInst=server1:/apps/comp/swbs/instances/99912|TId=executorWithCallerRunsPolicy-30|ProcId=dc47cf25-2318-4f61-bd33-10f8928916ce|RefMsgNm=creditDecisionInformation2011_02|MsgId=fc1935b6-06c0-42bb-89d1-caf1076fff68|SrcApp=XX|ViewPrefs=CUSTOMER_202108-customerDetailIndicator,customerRelationshipDailyUpdateIndicator,customerRelationshipMonthlyUpdateIndicator,customerRelationshipSummaryIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,disasterReliefIndicator~APPLICANT_201111-applicationDetailIndicator,incomeDetailIndicator~MULTIPLE_ACCOUNT_CUSTOMER-riskRecommendationIndicator~CUSTOMER_CLI_201705-customerCLIDataIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRltnshipMonthlySummaryTaxId|Elapsed=199|AppBody=SOR_RESP|

I had a query like this:

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt" AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"
| fields Channel, "Sub Application", Message, SOR, "SOR Message", count
| sort Channel, "Sub Application", Message, SOR, "SOR Message", count

Now my app has moved to the cloud, and the log data is embedded within JSON; events show up in Splunk like the one below. The application-specific log data is present as the value of the JSON field "msg".

{"cf_app_id":"75390614-95dc-474e-ad63-2358769b0641","cf_app_name":"CA00000-app-uat","cf_org_id":"7cc80b1c-0453-4487-ba19-4e3ffc868cf3","cf_org_name":"CA00000-test","cf_space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","cf_space_name":"uat","deployment":"cf-ab9ba4f5a0f082dfc130","event_type":"LogMessage","ip":"00.00.000.00","job":"diego_cell","job_index":"77731dca-f4e8-4079-a97d-b65f20911c53","message_type":"OUT","msg":"DT=2023-09-14T21:41:52.638-0500|LogId=WFTxLog|AppId=SWBS|AppInst=ff664658-a378-42f4-4a4d-2330:/home/vcap/app|TId=executorWithCallerRunsPolicy-7|ProcId=92d42ef2-7940-48e8-b4df-1f92790b657e|RefMsgNm=creditDecisionInformation2011_02|MsgId=8f5a46f8-7288-442f-9f56-ecaa05b345af|SrcApp=XX|ViewPrefs=CUSTOMER_201605-customerDetailIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,foreclosureIndicator,loanModifcationPrgrmIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,SSDILPrgrmIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRiskSummaryTaxId|Elapsed=259|AppBody=SOR_RESP|","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","tags":{"instance_id":"0","process_instance_id":"ff664658-a378-42f4-4a4d-2330","space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","space_name":"uat"},"timestamp":1694745712641480200}

I want the Splunk query output to look like it did before. How can I do this? Any suggestions?
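A minimal sketch of one approach, assuming the new cloud events land under a hypothetical sourcetype name: pull the embedded log line out of the JSON "msg" field, re-extract the pipe-delimited key=value pairs, then reuse the original reporting tail unchanged:

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:cloud:json"
| spath input=_raw path=msg output=msg
| rename msg as _raw
| extract pairdelim="|" kvdelim="="   ``` re-creates SrcApp, RefMsgNm, DestId, MsgNm, AppBody, etc. ```
| search AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"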
Is it possible to have warm data stored partially on the local indexers' storage and partially on remote storage? My total retention period is 90 days, but I would like to keep data for the first 45 days on the local indexers as warm buckets and then send the warm buckets to an AWS S3 bucket to be stored for another 45 days. I am using the settings below, and the data is sent to S3 after 90 days and stored as frozen instead of warm or cold. How can I keep warm or cold data in S3?

[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000

If I change frozenTimePeriodInSecs to 3888000 (45 days), the data will be sent to S3 after 45 days, but it will be sent as frozen buckets. I need to send the data as either warm or cold buckets. If I set up

[volume:local_store]
maxVolumeDataSizeMB = 1000000

that will send data to S3 only after filling 1 TB of local storage. How can I maintain time-based storage with 45 days on local storage and 45 days on remote storage?
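A sketch under the assumption that this is a SmartStore deployment (remotePath implies it): with SmartStore, warm buckets are uploaded to S3 as soon as they roll from hot, and the local copies act as a cache, so the 45-day split is usually approximated with the cache manager's eviction preference rather than with frozenTimePeriodInSecs:

# indexes.conf (sketch)
[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000      # delete/freeze after 90 days total

# server.conf on each indexer (sketch)
[cachemanager]
hotlist_recency_secs = 3888000        # prefer keeping ~45 days of buckets in the local cache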
Good day, I have this SPL:

index=test_7d sourcetype="Ibm:BigFix:CVE" earliest=-1d
| search FixletSourceSeverityTxt="Critical" OR FixletSourceSeverityTxt="Urgent"
| eval ReleaseDtm=strptime(FixletSourceReleaseDtm, "%Y-%m-%d")
| where ReleaseDtm>1672531199 ```Epoch time for 12/31/22 23:59:59, this gets only 2023 fixlets```
| eval _time = ReleaseDtm
| makemv FixletCVETxt
| eval cvecount=mvcount(FixletCVETxt)
| eval cvetotal=cvecount * RemediatedComputerCount
| timechart span=1mon sum(cvetotal) as cvetotal sum(RemediatedComputerCount) as RemediatedComputerCount by ComputerDatabaseId

That produces this output:

_time     RemediatedComputerCount: 0   RemediatedComputerCount: 1   cvetotal: 0   cvetotal: 1
2023-01   113575   29512   7177373    1619396
2023-02   230629   42239   4779801    1063153
2023-03   253770   55769   7246548    1792831
2023-04   139419   36156   8175076    2192272
2023-05   144117   40335   2781698    740369
2023-06   314684   81621   5574141    1420760
2023-07   163672   47112   13643474   3957452
2023-08   281891   65004   6481422    1587176
2023-09   4596     6508    62384      98312

I would like to add a monthly total column for the 'RemediatedComputerCount: #' columns next to the existing columns, then another column with totals for the 'cvetotal: #' columns. I don't want a grand total of each column at the bottom, just the total of the like columns in a new column per _time. Suggestions? Thanks!
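A minimal sketch using foreach to sum the like-named columns row by row, appended after the existing timechart:

| foreach "RemediatedComputerCount: *"
    [ eval RemediatedComputerCount_total = coalesce(RemediatedComputerCount_total, 0) + coalesce('<<FIELD>>', 0) ]
| foreach "cvetotal: *"
    [ eval cvetotal_total = coalesce(cvetotal_total, 0) + coalesce('<<FIELD>>', 0) ]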
Hello, does Tenable not send remediated vulnerabilities to Splunk after it has reported them once? The situation is as follows: Host ABC had its CVE-1234-5678 patched in April 2023, for which there is a record in the index. But after that, the remediated vulnerability has not been reported a single time; from there on, it only reports on the open ones. I tried enabling "Historical reporting of remediated vulnerabilities", but that still isn't helping. As a result, we consider that host to still have the vulnerability as "Open". Is this the expected behaviour? I thought this setting would report the remediated vulnerabilities each time the scan runs?
Hi, I'm trying to put together some search queries for common anomaly detection. I've been trying to find ones for these issues and I seem to come up with nothing. Some common ones would be (see the sketch after this list for the first of them):

- Drastic change in events per second
- Drastic change in the size of events
- Blocked outputs
- Change in the health of inputs
- Dropped events
- CPU usage that is really high (percentage?)
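A minimal sketch for the first item, flagging 5-minute windows where an index's event rate deviates more than three standard deviations from its own average (the threshold is an assumption to tune):

| tstats count where index=* by _time span=5m, index
| eventstats avg(count) as avg_count, stdev(count) as stdev_count by index
| eval zscore = abs(count - avg_count) / stdev_count
| where zscore > 3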
I'm working with a custom TA, AlertAction_SFTP, that has the following .conf.spec file:

[my_sftp_alert_action]
param.sftp_server = <string>
param.sftp_user = <string>
param.sftp_rfile = <string>
param.sftp_key = <string>
param.ssh_key_dir = <string>
param.sftp_password = <string>

When I try to use $date$ in the file name, filename-$date$, I get "Remote path is invalid." I've tried multiple ways of doing this, including adding date to my search:

index=vuln sourcetype="qualys:hostDetection" signature="SMB Version 1 Enabled" TAGS="*Server*" earliest=-1d@d latest=@d
| eval date=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| table date, *

I've tried $results.date$, $date$, and a couple of other things. Is there some reason that the rfile path must not use a Splunk variable? TIA, Joe
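One detail worth ruling out, as an assumption based on the standard alert-action token syntax rather than on this particular TA: per-result tokens use the singular $result.fieldname$ form and take the value from the first result row, so the saved-search parameter would look like:

# savedsearches.conf (sketch) -- $result.date$ pulls "date" from the first result row
action.my_sftp_alert_action.param.sftp_rfile = filename-$result.date$.csv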
Do you happen to know if Cisco Meraki syslog, especially Flows and URLs, has bytes in and bytes out? We're logging Meraki and there's no field whatsoever for bytes. Is it something that can be configured from the Meraki logger console? Or does the solution itself not record that?
Hello - I am trying to script the installation of the Mac Splunk Universal Forwarder package. The package is a disk image (.dmg). I understand that we can mount the image using hdiutil and access the volume to find the .pkg file. The issue comes when we attempt to run "installer -pkg volume/splunkuf.pkg -target /Applications/SplunkUf/": the end user is prompted to answer dialog boxes, which we do not want to occur. Is there a switch to use to install the pkg file silently? TIA, JH
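A minimal shell sketch, with the .dmg, volume, and .pkg names as placeholders for whatever the downloaded image actually contains; the installer CLI is non-interactive when run as root, and -target normally points at a volume such as / rather than at the install directory:

hdiutil attach splunkforwarder.dmg -nobrowse -quiet
sudo installer -pkg "/Volumes/Splunk Universal Forwarder/splunkforwarder.pkg" -target /
hdiutil detach "/Volumes/Splunk Universal Forwarder" -quiet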
Hello, I am trying to find the dates when the host stopped sending logs to Splunk in the last 6 months. I have used the search below, but it can only find the earliest and latest indexed times. I would also like to know the dates when the host stopped sending logs.

| tstats count as totalcount earliest(_time) as firstTime latest(_time) as lastTime where index=linux host=xyz by host
| fieldformat firstTime=strftime(firstTime,"%Y-%m-%d %H:%M:%S")
| fieldformat lastTime=strftime(lastTime,"%Y-%m-%d %H:%M:%S")

Thanks
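A minimal sketch that, run over the last 6 months, lists every day in the window with no events from the host by letting timechart fill in the empty daily buckets:

| tstats count where index=linux host=xyz by _time span=1d
| timechart span=1d sum(count) as count
| fillnull value=0 count
| where count=0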
Hi, are there any specific fields in the VPN logs that we can exclude because they are not needed for investigation, so that we can save on license cost?
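Which fields are safe to drop depends on the VPN product, but as a sketch of the mechanism (the sourcetype and field names here are hypothetical), a SEDCMD in props.conf can strip a verbose key=value pair before the event is indexed and counted against the license:

# props.conf (sketch)
[vpn:sourcetype]
SEDCMD-drop_useragent = s/user_agent=[^|]+\|//g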
Hi, can anyone please advise whether there is a quick way, from the deployment server or the SH itself, to find the list of servers that are not managed by the deployment server but are transmitting data to Splunk? Thanks
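A minimal sketch to run on the deployment server (so the REST endpoint is local), diffing the hosts that have sent data against the registered deployment clients; the hostnames must match for the subtraction to work:

| metadata type=hosts index=*
| fields host
| search NOT
    [| rest /services/deployment/server/clients splunk_server=local
     | fields hostname
     | rename hostname as host ]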
We use an asset file correctly configured on ES, but we noticed that the enrichment based on "asset_lookup_by_cidr" is not working correctly because the lookup is not sorted by CIDR class. For example, in the following sample the sorting is based on "lexicographic" order instead of the real CIDR logic:

1.2.30.0/26
1.2.30.128/25
1.2.31.0/24
1.2.32.0/24
1.2.33.0/25
1.2.33.128/25

We tried to solve the problem by creating a saved search that automatically performs the right sort, but soon after it runs, the "asset_lookup_by_cidr" lookup is replaced with the "lexicographic" order again. My saved search:

| inputlookup asset_lookup_by_cidr
| eval ip=replace(ip,"\s+","")
| eval sorted=case(match(ip,"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{2}"),substr(ip,-2),match(ip,"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1}"),substr(ip,-1),1=1,"0")
| sort limit=0 - sorted
| fields - sorted
| outputlookup asset_lookup_by_cidr

Is there a quick solution to this problem? Because it is big trouble for notables based on IP addresses.
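Hard to confirm without seeing the lookup definition, but a sketch of what is worth double-checking (stanza and key names assume the stock ES definition): CIDR matching is driven by the transforms.conf match_type, and with max_matches = 1 the first matching row in file order wins, which is why the rebuild that re-sorts the file breaks the enrichment. The definition to inspect looks like:

# transforms.conf (sketch)
[asset_lookup_by_cidr]
match_type = CIDR(ip)
max_matches = 1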
How do I create a top 10 DB queries view in AppDynamics dashboards and reports?
Hi all, is there a way to disable the audience restriction verification on the SAML response? In our case, based on the SiteMinder configuration, this is the only way to resolve the issue. Thank you!
Hi everyone, I have a question regarding the configuration of the Microsoft Teams Add-on for Splunk app. When I configure a Webhook, a TeamsSubscription, and a CallRecord according to this guide, MS Teams data flows into my Splunk instance. Just like the guide suggests, I use ngrok, since the server my Splunk instance is running on is not accessible via HTTPS. Ngrok is fine for testing, but I want to swap it out for my actual proxy server. I tried several different settings, but no more data comes in. Given that data came in for as long as I used ngrok, all settings related to Azure (Tenant ID, Client ID, Client Secret) must be correct. The issue lies somewhere in the proxy server settings. Can anyone share some insights on how to configure the MS Teams Add-on as well as the proxy server settings? Here is my current setup.

Webhook
- Name: Webhook
- Interval: 30
- Index: ms_teams
- Port: 4444

Subscription
- Name: Subscription
- Interval: 86400
- Index: ms_teams
- Global Account: MSAzure
- Tenant ID: mytenantidfromazure
- Environment: Public
- Webhook URL: myproxy.server.com <------- splunkinstanceserver.com:4444 or myproxy.server.com?
- Endpoint: v1.0

CallRecord
- Name: CallRecord
- Interval: 30
- Index: ms_teams
- Global Account: MSAzure
- Tenant ID: mytenantidfromazure
- Environment: Public
- Endpoint: v1.0
- Max Batch Size: 5000

Proxy
- Enable: checked
- Host: myproxyserver.com
- Port: 4444 <--------- Is this meant to be the port of my webhook or where my proxy takes HTTPS requests?
- Username: userformyproxyserver
- PW: userpwformyproxyserver

splunkd.log (paths are shortened for readability):

.../TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):
.../TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: File ".../TA_MS_Teams/bin/ta_ms_teams/aob_py3/solnlib/utils.py", line 148, in wrapper
.../TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: return func(*args, **kwargs)
.../TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: File ".../TA_MS_Teams/bin/ta_ms_teams/aob_py3/solnlib/credentials.py", line 128, in get_password
.../TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: "Failed to get password of realm=%s, user=%s." % (self._realm, user)
.../TA_MS_Teams/bin/TA_MS_Teams_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA_MS_Teams#configs/conf-ta_ms_teams_settings, user=proxy.
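Not a confirmed fix, but a sketch of the usual split: an add-on's Proxy settings typically cover outbound calls to the Graph API, while the Webhook URL must be a public HTTPS address that forwards inbound change notifications to the webhook port, which is the role ngrok was playing. Under that assumption, a reverse-proxy configuration (nginx here, all names hypothetical) would look like:

server {
    listen 443 ssl;
    server_name myproxy.server.com;
    ssl_certificate     /etc/ssl/certs/myproxy.crt;
    ssl_certificate_key /etc/ssl/private/myproxy.key;

    location / {
        # forward Graph change notifications to the add-on's webhook listener;
        # the scheme depends on whether the webhook input itself has TLS enabled
        proxy_pass https://splunkinstanceserver.com:4444;
    }
}

The Webhook URL in the Subscription would then be https://myproxy.server.com rather than the Splunk host itself.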
Hi, I'm trying to set a specific color for each of the 4 dynamic labels of my 3 trellis pie charts. I already added the series color option:

<option name="charting.seriesColors">[#CFD6EA,#C45AB3,#735CDD,#8fba38]</option>

My issue is that my labels are "dynamic" and I don't have a constant number of categories (it changes per chart between 0 and 4 according to the data I receive), so my color palette sequence is not aligned with the number of categories. For example, I want to set the following:

type_A - Red
type_B - Blue
type_C - Green

The problem is that sometimes category "type_A" is missing from one or more of my charts, and category "type_B" gets its color (Red) instead of Blue. Here is my query:

<---Search-->
| stats dc(sessions) as Number_of_Sessions by type
| sort type
| eval type = type." - ".Number_of_Sessions

I will very much appreciate any help I can get :-) 10Q
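A sketch based on charting.fieldColors, which pins a color to an exact series name regardless of which categories are present in a given chart; the catch is that the series name must be stable, so the final eval that appends the count to the name would have to be dropped (or the count shown some other way):

<option name="charting.fieldColors">{"type_A": 0xFF0000, "type_B": 0x0000FF, "type_C": 0x008000}</option>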
Using the Splunk Machine Learning Toolkit command | fit LinearRegression blah, blah into ModelName, I can generate a ModelName file. Using the command | summary ModelName, I can generate a result set that has feature and coefficient fields. How can I "extract" the numerical coefficients so that I can create a regression equation for future use? Example: I'm trying to create the equation y = c0 + c1 * Term1 + c2 * Term2 for a future modeling activity.
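A minimal sketch that assembles the equation string from the summary output, assuming the intercept row is labeled _intercept as in recent MLTK versions:

| summary ModelName
| eval term = if(feature="_intercept", tostring(coefficient), tostring(coefficient)." * ".feature)
| stats list(term) as terms
| eval equation = "y = ".mvjoin(terms, " + ")
| table equation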