All Posts



Hi @mohsplunking, if you need only an alert then, as I said, the solution from @bowesmana is perfect and you don't need any additional command. The eval/case could be useful if you need to display some additional information, e.g. a level based on the alert quantity. Ciao. Giuseppe
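For illustration, a minimal sketch of that eval/case idea, building on the alert search from this thread (the thresholds and level names here are made up, not from the original posts):

```
index=your_source_index status>=400 status<600
| stats count by ip
| where count>100
| eval alert_level=case(count>500, "critical", count>250, "high", true(), "warning")
```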
Hi @love0sxy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I had data like this in Splunk.

DT=2023-09-13T23:59:56.029-0500|LogId=WFTxLog|AppId=SWBS|AppInst=server1:/apps/comp/swbs/instances/99912|TId=executorWithCallerRunsPolicy-30|ProcId=dc47cf25-2318-4f61-bd33-10f8928916ce|RefMsgNm=creditDecisionInformation2011_02|MsgId=fc1935b6-06c0-42bb-89d1-caf1076fff68|SrcApp=XX|ViewPrefs=CUSTOMER_202108-customerDetailIndicator,customerRelationshipDailyUpdateIndicator,customerRelationshipMonthlyUpdateIndicator,customerRelationshipSummaryIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,disasterReliefIndicator~APPLICANT_201111-applicationDetailIndicator,incomeDetailIndicator~MULTIPLE_ACCOUNT_CUSTOMER-riskRecommendationIndicator~CUSTOMER_CLI_201705-customerCLIDataIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRltnshipMonthlySummaryTaxId|Elapsed=199|AppBody=SOR_RESP|

I had a query like this.

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt" AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"
| fields Channel, "Sub Application", Message, SOR, "SOR Message", count
| sort Channel, "Sub Application", Message, SOR, "SOR Message", count

Now my app has moved to the cloud, the log data is embedded within JSON, and events show up in Splunk like this. The application-specific log data is present as the value of the JSON field "msg".
{"cf_app_id":"75390614-95dc-474e-ad63-2358769b0641","cf_app_name":"CA00000-app-uat","cf_org_id":"7cc80b1c-0453-4487-ba19-4e3ffc868cf3","cf_org_name":"CA00000-test","cf_space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","cf_space_name":"uat","deployment":"cf-ab9ba4f5a0f082dfc130","event_type":"LogMessage","ip":"00.00.000.00","job":"diego_cell","job_index":"77731dca-f4e8-4079-a97d-b65f20911c53","message_type":"OUT","msg":"DT=2023-09-14T21:41:52.638-0500|LogId=WFTxLog|AppId=SWBS|AppInst=ff664658-a378-42f4-4a4d-2330:/home/vcap/app|TId=executorWithCallerRunsPolicy-7|ProcId=92d42ef2-7940-48e8-b4df-1f92790b657e|RefMsgNm=creditDecisionInformation2011_02|MsgId=8f5a46f8-7288-442f-9f56-ecaa05b345af|SrcApp=XX|ViewPrefs=CUSTOMER_201605-customerDetailIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,foreclosureIndicator,loanModifcationPrgrmIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,SSDILPrgrmIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRiskSummaryTaxId|Elapsed=259|AppBody=SOR_RESP|","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","tags":{"instance_id":"0","process_instance_id":"ff664658-a378-42f4-4a4d-2330","space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","space_name":"uat"},"timestamp":1694745712641480200}

I want the Splunk query output to look like it did before. How can I do this? Any suggestions?
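One possible approach, sketched under the assumption that spath can lift the pipe-delimited payload out of the JSON and that the old key=value parsing still applies once msg is promoted to _raw (index, sourcetype, and search terms are copied from the original query above, so adjust them to whatever the cloud events actually carry):

```
index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt"
| spath output=msg path=msg
| eval _raw=msg
| extract pairdelim="|" kvdelim="="
| search AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"
```

The eval _raw=msg step is there because the extract command works on _raw; from that point on, the rest of the original pipeline should apply unchanged.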
I have opened a case and Splunk Engineering confirmed that there is a bug due to a code change since version 9.0.5. The workaround for now is to assign the user the "admin_all_objects" capability, but that is very close to full administrative privileges.
We have also started getting this issue after updating Splunk from 8.2 to 9.1. When I change a user's role to Admin, they can upload data; when I change back to the previous role with the mentioned permissions, they get the same error, "User is not allowed to modify the job". This must be a change in one of the roles in the update, but I'm just trying to figure out which permission is needed. My user has the following capabilities, which is what I read is required to add data: edit_monitor, indexes_edit, edit_tcp, search. But still no luck. Will update if I find the specific permission required.
Is it possible to have warm data stored partially on the local indexers' storage and partially on remote storage? My total retention period is 90 days, but I would like to keep data for the first 45 days on the local indexers as warm buckets and then send the warm buckets to an AWS S3 bucket for another 45 days. I am using the settings below, and the data is sent to S3 after 90 days and is stored as frozen instead of warm or cold. How can I keep warm or cold data in S3?

[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000

If I change frozenTimePeriodInSecs = 3888000 (45 days), the data will be sent to S3 after 45 days, but it will be sent as frozen buckets. I need to send data as either warm or cold buckets. If I set up

[volume:local_store]
maxVolumeDataSizeMB = 1000000

that will send data to S3 only after filling 1 TB of local storage. How can I maintain time-based storage with 45 days on local storage and 45 days on remote storage?
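For context, the remotePath setting above means this is SmartStore, where the 45/45 split isn't directly expressible: warm buckets are uploaded to the remote store as soon as they roll from hot, and the local copies act as a cache rather than primary storage. The closest equivalent is to tune the cache manager so that roughly the most recent 45 days of buckets stay resident in the local cache. A hedged sketch (the setting names are real SmartStore options; the values are illustrative, not a tested sizing):

```
# indexes.conf
[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000    # 90 days total retention, then freeze

# server.conf on the indexers
[cachemanager]
hotlist_recency_secs = 3888000      # ~45 days: prefer keeping recently created buckets in the local cache
```

With this shape, searches over the most recent ~45 days are generally served from local disk, and older data is fetched from S3 on demand until the 90-day freeze.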
According to the method you mentioned, the problem has been successfully solved. Thank you very much
Use the addtotals command, which will sum up the fields you specify - it supports wildcards.

| addtotals RemediatedComputerCount* fieldname=RemediatedComputerCount
| addtotals cvetotal* fieldname=cvetotal
When you said you did this: "I added a new lookup by uploading the CSV file by going to Lookups » Lookup table files » Add new; the CSV file was uploaded to this directory and I can change the permission". When you upload the CSV, the list of lookups will show your lookup as private. You can then change the permissions to app, which MOVES the file system location of that file to the location inside the app. Then when you run outputlookup, you will be updating the one in the app folder, which you have previously set to app permission.
@mohsplunking

index=your_source_index status>=400 status<600
| stats count by ip
| where count>100

or you can do

index=your_source_index status>=400 status<600
| top ip
| where count > 100

but I would prefer stats over top.
Did you ever find out how to do this? I'm in a similar situation and was hoping you were able to find a workaround?
Thank you @elizabethl_splu, very much appreciated. I'll keep an eye on the release notes.
Good day, I have this SPL:

index=test_7d sourcetype="Ibm:BigFix:CVE" earliest=-1d
| search FixletSourceSeverityTxt="Critical" OR FixletSourceSeverityTxt="Urgent"
| eval ReleaseDtm=strptime(FixletSourceReleaseDtm, "%Y-%m-%d")
| where ReleaseDtm>1672531199 ```Epoch time for 12/31/22 23:59:59, this gets only 2023 fixlets```
| eval _time = ReleaseDtm
| makemv FixletCVETxt
| eval cvecount=mvcount(FixletCVETxt)
| eval cvetotal=cvecount * RemediatedComputerCount
| timechart span=1mon sum(cvetotal) as cvetotal sum(RemediatedComputerCount) as RemediatedComputerCount by ComputerDatabaseId

That has the output of:

_time    RemediatedComputerCount: 0  RemediatedComputerCount: 1  cvetotal: 0  cvetotal: 1
2023-01  113575                      29512                       7177373      1619396
2023-02  230629                      42239                       4779801      1063153
2023-03  253770                      55769                       7246548      1792831
2023-04  139419                      36156                       8175076      2192272
2023-05  144117                      40335                       2781698      740369
2023-06  314684                      81621                       5574141      1420760
2023-07  163672                      47112                       13643474     3957452
2023-08  281891                      65004                       6481422      1587176
2023-09  4596                        6508                        62384        98312

I would like to add a column totalling the 'RemediatedComputerCount: #' columns by month next to the existing columns, then another column with totals for the 'cvetotal: #' columns. I don't want a grand total of each column at the bottom, just the total of the like columns in a new column per _time. Suggestions? Thanks!
Hello, does Tenable not send remediated vulnerabilities to Splunk after it has reported them once? The situation is as follows: host ABC had its CVE-1234-5678 patched in April 2023, for which there is a record in the index. But since then the remediated vulnerability has not been reported a single time; only the open ones are reported from there on. I tried enabling "Historical reporting of remediated vulnerabilities", but that still isn't helping. As a result, we consider that host to have the vulnerability as "Open". Is this the expected behaviour? I thought this setting would report the remediated vulnerabilities each time the scan runs?
Hi, I'm trying to put together some search queries for some common anomaly detection. I've been trying to find ones for these issues and I seem to come up with nothing. Some common ones though would be:

- Drastic change in events per second
- Drastic change in size of events
- Blocked outputs
- Change of health of inputs?
- Dropped events
- CPU percentage use that is REALLY high (percentage?)
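For the first item on that list, one hedged starting point is the indexers' own metrics.log (index=_internal, group=per_index_thruput, and the ev field are standard internal logging; the 5-minute span, 12-bucket window, and z-score threshold of 3 are arbitrary choices to tune):

```
index=_internal source=*metrics.log group=per_index_thruput
| timechart span=5m sum(ev) as events
| streamstats window=12 avg(events) as avg_events stdev(events) as sd_events
| eval zscore=if(sd_events>0, (events-avg_events)/sd_events, 0)
| where abs(zscore) > 3
```

The same streamstats/z-score pattern can be reused for event size (sum(kb) instead of sum(ev)) and similar "drastic change" checks.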
I'm working with a custom TA, AlertAction_SFTP, that has the following .conf.spec file.

[my_sftp_alert_action]
param.sftp_server = <string>
param.sftp_user = <string>
param.sftp_rfile = <string>
param.sftp_key = <string>
param.ssh_key_dir = <string>
param.sftp_password = <string>

When I try to use $date$ in the file name, filename-$date$, I get "Remote path is invalid." I've tried multiple ways of doing this, including adding date to my search:

index=vuln sourcetype="qualys:hostDetection" signature="SMB Version 1 Enabled" TAGS="*Server*" earliest=-1d@d latest=@d
| eval date=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| table date, *

I've tried $results.date$, $date$, and a couple of other things. Is there some reason that the rfile path must not use a Splunk variable? TIA Joe
Try something like this

| metasearch index=linux
| timechart count by host useother=f
| untable _time host count
| where count=0
Do you happen to know if Cisco Meraki syslog, especially Flows and URLs, has bytes in and bytes out? We're logging Meraki and there's no field whatsoever for bytes. Is it something that can be configured from the Meraki logger console? Or does the solution itself not record that?
Hello - I am trying to script the installation of the Mac Splunk Universal Forwarder package. The package is a disk image (.dmg). I understand that we can mount the image using hdiutil and access the volume to find the .pkg file. The issue comes when we attempt to run installer -pkg volume/splunkuf.pkg -target /Applications/SplunkUf/ : the end user is prompted to answer dialog boxes, which we do not want to occur. Is there a switch to use to install the pkg file silently? TIA JH
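A sketch of the usual non-interactive pattern (file names and the mount point are assumptions, not from the post). Two things worth noting: the command-line installer is designed to run without GUI prompts when invoked from a root shell (dialogs typically appear only when the .pkg is opened in the Finder), and installer's -target expects a volume such as /, not an install directory:

```
#!/bin/sh
# Mount the disk image without opening a Finder window
hdiutil attach SplunkForwarder.dmg -nobrowse -mountpoint /tmp/splunkuf

# Install the package non-interactively; -target takes a volume, not a directory
sudo installer -pkg /tmp/splunkuf/*.pkg -target /

# Unmount when done
hdiutil detach /tmp/splunkuf
```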
Hello, I am trying to find the dates when the host stopped sending logs to Splunk in the last 6 months. I have used the search below but can only find the earliest and latest indexed time. I'd also like to know the dates when the host stopped sending logs.

| tstats count as totalcount earliest(_time) as firstTime latest(_time) as lastTime where index=linux host=xyz by host
| fieldformat firstTime=strftime(firstTime,"%Y-%m-%d %H:%M:%S")
| fieldformat lastTime=strftime(lastTime,"%Y-%m-%d %H:%M:%S")

Thanks
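To surface the actual gap dates (rather than just first/last indexed time), one sketch is to bucket counts by day and keep the empty days; timechart back-fills the missing buckets so days with no data show up as zero (index and host are taken from the search above, and the 1-day span is a guess at the granularity wanted):

```
| tstats count where index=linux host=xyz by _time span=1d
| timechart span=1d sum(count) as count
| fillnull value=0 count
| where count=0
```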