How to extract fields which come under message and failedRecords.
Hi, I've searched the forums and found one thread about getting Intune data into Splunk, which set me on a path, hopefully, to getting the data in. I'm working through the guidance here - Splunk Add-on for Microsoft Cloud Services - Splunk Documentation. I have requested and got an Event Hub from the team that looks after Azure infrastructure. I've created the Azure app registration, set the API permissions and access control on the Event Hub, and can see it is being successfully signed in to, having configured the Splunk add-on side as well. I'm not seeing any data come into the index though. The one thing I'm unclear on, and haven't been able to work out a definitive answer to, is whether I need an Azure storage account in order to store the data. My reading of the Event Hub configuration options suggested to me that it was capable of some form of retention to allow streaming elsewhere (e.g. setting the retention time), but perhaps I am misinterpreting that. Has anyone successfully got this working, and if so, are you using a storage account with this?
Need trial license for End user monitoring
Dear all, I have a list of latitude and longitude pairs from my observed events and am trying to get the corresponding street/city for each pair. I found this website https://nominatim.org/release-docs/develop/api/Reverse/ and the URL it provides to map latitude and longitude to a street/building:

https://nominatim.openstreetmap.org/reverse?format=xml&lat=<my_lat>&lon=<my_lon>&zoom=18

For example, https://nominatim.openstreetmap.org/reverse?format=xml&lat=39.982302&lon=116.484438&zoom=18 outputs: 360building, xx street, xx road, xx village, xx district, Beijing, 100015, China.

However, my raw data has more than 1000 records, so it's impossible to enter them into the URL manually:

log id  Lat        Lon
1       39.982302  116.484438
2       30.608684  114.418229

Does anyone know a way to pass Lat and Lon as input parameters in the URL by code? Thank you.
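A minimal shell sketch of one way to automate this, assuming the pairs are exported to a CSV file with columns id,lat,lon (latlon.csv and the result file names are hypothetical). Nominatim's usage policy asks for an identifying User-Agent and no more than one request per second, hence the header and the sleep:

#!/bin/sh
# Reverse-geocode each id,lat,lon line against the Nominatim endpoint above
while IFS=, read -r id lat lon; do
  curl -s "https://nominatim.openstreetmap.org/reverse?format=xml&lat=${lat}&lon=${lon}&zoom=18" \
       -H "User-Agent: my-reverse-geocoder/1.0" \
       -o "result_${id}.xml"
  sleep 1   # stay within Nominatim's 1 request/second limit
done < latlon.csv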
Hi all, I'm attempting to exclude specific undesired data from the security logs. Is there a way to minimize the number of items on the exclusion list, considering that we can only add up to 10 items to the blacklist due to its limitations? Is there a possibility of consolidating the blacklist, for example by joining blacklisted items with a "|" (pipe), thereby reducing the total number of entries while still achieving the desired exclusion?

blacklist1 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))"
blacklist2 = EventCode="4648|5145|4799|5447|4634|5156|4663|4656|5152|5157|4658|4673|4661|4690|4932|4933|5158|4957|5136|4674|4660|4670|5058|5061|4985|4965"
blacklist3 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunkd.exe)|.+(?:SplunkUniversalForwarder\\bin\\btool.exe)"
blacklist4 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk-winprintmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-powershell.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-regmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-netmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-admon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-MonitorNoHandle.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-winevtlog.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-perfmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-wmi.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk\-winhostinfo\.exe)"
blacklist5 = EventCode="4688" Message="(?:Process Command Line:).+(?:system32\\SearchFilterHost.exe)|.+(?:find/i)|.+(?:WINDOWS\\system32\\conhost.exe)"
renderXml=true
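For what it's worth, each blacklistN value in inputs.conf is an ordinary regular expression (blacklist2 above already relies on that for EventCode), so the three EventCode="4688" entries can in principle be folded into one by joining their Message alternatives with |. A sketch of the idea, with the alternation list shortened for readability; a real entry would carry the full set of executables from the original blacklist3-5:

[WinEventLog://Security]
blacklist3 = EventCode="4688" Message="(?:New Process Name:).+SplunkUniversalForwarder\\bin\\(?:splunk|splunkd|btool|splunk-powershell|splunk-winevtlog)\.exe"

The practical limit then becomes regex length and readability rather than the 10-entry cap.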
I am trying to configure SAML SSO for my Splunk Heavy Forwarders. When I upload and save the SP metadata file and the remaining details, it shows the message below at the top, but the configuration is not getting saved in the backend and is not working: "SAML has already been configured, Cannot add a new SAML configuration.saml" Your help is greatly appreciated.
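One thing worth checking, as a sketch rather than a confirmed fix for this message: SAML settings end up in authentication.conf, and a stale stanza left behind by an earlier attempt can make the UI refuse a new configuration. btool shows the effective settings and which file each comes from:

$SPLUNK_HOME/bin/splunk btool authentication list --debug

If a leftover [saml] stanza shows up in etc/system/local/authentication.conf, remove it (back the file up first) and restart Splunk before reconfiguring.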
Hi All, just wondering if anyone has a search that shows which user deleted another user in Linux? Typically, when we check the Linux syslog messages for userdel events, they only show the name of the account that was deleted. There is no mention of which user performed the action. Whereas in Windows events, we see both src and target user for the deletion Event IDs. How do we get this info? I know one can manually log in to the host and check ~/.bash_history, but how do you accomplish this from Splunk itself?
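If auditd runs on the hosts and its log is indexed, the DEL_USER record carries both sides: the invoking user's auid and the deleted account. A sketch, assuming an index named linux and the common linux_audit sourcetype; the exact field layout (acct= vs id=) varies between audit versions, so adjust the rex accordingly:

index=linux sourcetype=linux_audit type=DEL_USER
| rex field=_raw "auid=(?<actor_uid>\d+)"
| rex field=_raw "acct=\"?(?<deleted_account>[^\"\s]+)"
| table _time host actor_uid deleted_account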
My Splunk instance has a problem: Total License Usage 90% Near Daily Quota. How can I solve it, please?
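A common first step is to see which sourcetypes and indexes are consuming the quota; a sketch, assuming it is run where the license manager's _internal data is searchable:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by st, idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

From there you can filter noisy sources at the forwarder, or raise the license if the growth is legitimate.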
Hi, I am working on a POC project. I need to add a timestamp field in order to use the predict algorithm. My dataset, however, only has the fields below:

MonthYear
Year
Quarter
Month
Subscription ID
Subscription name

I was looking for a solution to randomly add timestamps over a period of one year. I have data from March 2023 till May 2023.

MonthYear   Quarter  Month  Subscription ID                      Subscription name
March 2023  Qtr 1    March  020b3b0c-5b0a-41a1-8cd7-90cbd63e06   SUB-PRD-EU-EDLCONTACTS
March 2023  Qtr 1    March  020b3b0c-5b0a-41a1-8cd7-90cbd63e06   SUB-PRD-EU-EDLCONTACTS
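One way to synthesize a _time value, sketched under the assumption that MonthYear parses with "%B %Y": anchor each row at the start of its month and add a random offset within the month, so predict has timestamps spread across the period:

| eval month_start = strptime(MonthYear, "%B %Y")
| eval _time = month_start + (random() % 2592000) ```random offset within ~30 days```
| sort _time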
Hi, I am trying to look at data related to EventCode="4662", but it does not show up in Splunk. I also checked inputs.conf on the indexer and it was not present, so I copied inputs.conf from default:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
index = wineventlog
renderXml=false

I have checked within Windows Event Viewer on our Domain Controller that Event 4662 is present, but Splunk searches for EventCode=4662 produce no results. What I want to see is event code 4662 whose message contains "Object Type: user". Here is the Event Viewer log that I want Splunk to show:

An operation was performed on an object.
Subject:
  Security ID: CIMBNIAGA\YT91504X
  Account Name: YT91504X
  Account Domain: CIMBNIAGA
  Logon ID: 0xC2D9E1AC
Object:
  Object Server: DS
  Object Type: user
  Object Name: CN=ADJOINADMIN,OU=Functional ID,OU=Special_OU,DC=cimbniaga,DC=co,DC=id
  Handle ID: 0x0
Operation:
  Operation Type: Object Access
  Accesses: READ_CONTROL
  Access Mask: 0x20000
  Properties: READ_CONTROL {bf967aba-0de6-11d0-a285-00aa003049e2}
Additional Information:
  Parameter 1: -
  Parameter 2:

Please help me, I am really stuck. I have already tried deleting the blacklist filtering, but it still does not give me the log I want, like the one above. @kheo_splunk
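For anyone hitting the same wall, note what blacklist1 above does: the negative lookahead drops every 4662 event whose Object Type is anything other than groupPolicyContainer, and "Object Type: user" falls squarely into that bucket, so those events are filtered out before they reach the index. A sketch of a lookahead that lets user objects through as well (blacklists only affect events indexed after the change, so a forwarder restart and fresh events are needed to see the effect):

[WinEventLog://Security]
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!(groupPolicyContainer|user))"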
I had data like this in Splunk:

DT=2023-09-13T23:59:56.029-0500|LogId=WFTxLog|AppId=SWBS|AppInst=server1:/apps/comp/swbs/instances/99912|TId=executorWithCallerRunsPolicy-30|ProcId=dc47cf25-2318-4f61-bd33-10f8928916ce|RefMsgNm=creditDecisionInformation2011_02|MsgId=fc1935b6-06c0-42bb-89d1-caf1076fff68|SrcApp=XX|ViewPrefs=CUSTOMER_202108-customerDetailIndicator,customerRelationshipDailyUpdateIndicator,customerRelationshipMonthlyUpdateIndicator,customerRelationshipSummaryIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,disasterReliefIndicator~APPLICANT_201111-applicationDetailIndicator,incomeDetailIndicator~MULTIPLE_ACCOUNT_CUSTOMER-riskRecommendationIndicator~CUSTOMER_CLI_201705-customerCLIDataIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRltnshipMonthlySummaryTaxId|Elapsed=199|AppBody=SOR_RESP|

I had a query like this:

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt" AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"
| fields Channel, "Sub Application", Message, SOR, "SOR Message", count
| sort Channel, "Sub Application", Message, SOR, "SOR Message", count

Now my app has moved to the cloud and the log data is embedded within JSON; events show up in Splunk like this, with the application-specific log data present as the value of the JSON field "msg":

{"cf_app_id":"75390614-95dc-474e-ad63-2358769b0641","cf_app_name":"CA00000-app-uat","cf_org_id":"7cc80b1c-0453-4487-ba19-4e3ffc868cf3","cf_org_name":"CA00000-test","cf_space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","cf_space_name":"uat","deployment":"cf-ab9ba4f5a0f082dfc130","event_type":"LogMessage","ip":"00.00.000.00","job":"diego_cell","job_index":"77731dca-f4e8-4079-a97d-b65f20911c53","message_type":"OUT","msg":"DT=2023-09-14T21:41:52.638-0500|LogId=WFTxLog|AppId=SWBS|AppInst=ff664658-a378-42f4-4a4d-2330:/home/vcap/app|TId=executorWithCallerRunsPolicy-7|ProcId=92d42ef2-7940-48e8-b4df-1f92790b657e|RefMsgNm=creditDecisionInformation2011_02|MsgId=8f5a46f8-7288-442f-9f56-ecaa05b345af|SrcApp=XX|ViewPrefs=CUSTOMER_201605-customerDetailIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,foreclosureIndicator,loanModifcationPrgrmIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,SSDILPrgrmIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRiskSummaryTaxId|Elapsed=259|AppBody=SOR_RESP|","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","tags":{"instance_id":"0","process_instance_id":"ff664658-a378-42f4-4a4d-2330","space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","space_name":"uat"},"timestamp":1694745712641480200}

I want the Splunk query output to look like it did before. How do I do this? Any suggestions?
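A sketch of one approach, assuming the JSON is well formed: pull msg out with spath, treat it as _raw, and let extract re-split it on the pipe/equals delimiters; the original table/top/rename pipeline can then be appended unchanged:

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt"
| spath input=_raw path=msg output=msg
| rename msg as _raw
| extract pairdelim="|" kvdelim="="
| search AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm

Note the AppBody/DestId filters move after the extraction, since those fields now live inside msg rather than at the top level of the event.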
Is it possible to have warm data stored partially on the local indexers' storage and partially on remote storage? My total retention period is 90 days, but I would like to keep data for the first 45 days on the local indexers as warm buckets and then send the warm buckets to an AWS S3 bucket for another 45 days. I am using the settings below; the data is sent to S3 after 90 days and stored as frozen instead of warm or cold. How can I keep warm or cold data in S3?

[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000

If I change frozenTimePeriodInSecs = 1555200 (45 days), the data will be sent to S3 after 45 days, but it will be sent as frozen buckets. I need to send data as either warm or cold buckets. If I set up

[volume:local_store]
maxVolumeDataSizeMB = 1000000

that will send data to S3 only after 1 TB of local storage fills up. How can I maintain time-based storage, with 45 days on local storage and 45 days on remote storage?
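For what it's worth: with a SmartStore remotePath volume, warm buckets are uploaded to S3 as soon as they roll from hot, and the local copies act as a cache, so there is no supported way to hold warm data only locally for 45 days and only then move it to S3. The closest knob is the cache manager's hotlist setting, which tells it to prefer keeping recently written buckets in the local cache; a sketch, assuming SmartStore is in use:

[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000
# Hint to the cache manager to keep buckets newer than ~45 days local
hotlist_recency_secs = 3888000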
Good day, I have this SPL:

index=test_7d sourcetype="Ibm:BigFix:CVE" earliest=-1d
| search FixletSourceSeverityTxt="Critical" OR FixletSourceSeverityTxt="Urgent"
| eval ReleaseDtm=strptime(FixletSourceReleaseDtm, "%Y-%m-%d")
| where ReleaseDtm>1672531199 ```Epoch time for 12/31/22 23:59:59, this gets only 2023 fixlets```
| eval _time = ReleaseDtm
| makemv FixletCVETxt
| eval cvecount=mvcount(FixletCVETxt)
| eval cvetotal=cvecount * RemediatedComputerCount
| timechart span=1mon sum(cvetotal) as cvetotal sum(RemediatedComputerCount) as RemediatedComputerCount by ComputerDatabaseId

That has the output of:

_time    RemediatedComputerCount: 0  RemediatedComputerCount: 1  cvetotal: 0  cvetotal: 1
2023-01  113575                      29512                       7177373      1619396
2023-02  230629                      42239                       4779801      1063153
2023-03  253770                      55769                       7246548      1792831
2023-04  139419                      36156                       8175076      2192272
2023-05  144117                      40335                       2781698      740369
2023-06  314684                      81621                       5574141      1420760
2023-07  163672                      47112                       13643474     3957452
2023-08  281891                      65004                       6481422      1587176
2023-09  4596                        6508                        62384        98312

I would like to add a total of the 'RemediatedComputerCount: #' columns by month in a new column next to the existing ones, then another column with totals for the 'cvetotal: #' columns. I don't want a grand total of each column at the bottom, just the total of the like columns in a new column per _time. Suggestions? Thanks!
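addtotals with an explicit fieldname and a wildcarded field list produces exactly this kind of per-row total. A sketch to append after the timechart, assuming the wildcard patterns match only the intended columns:

| addtotals fieldname="RemediatedComputerCount: total" "RemediatedComputerCount: *"
| addtotals fieldname="cvetotal: total" "cvetotal: *"

Each row then gains two new columns holding the sum of the like-named columns for that month.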
Hello, does Tenable not send remediated vulnerabilities to Splunk after it has reported them once? The situation is as follows: Host ABC had its CVE-1234-5678 patched in April 2023, for which there is a record in the index. But after that, the remediated vulnerability has not been reported a single time; only the open ones are reported from there on. I tried enabling "Historical reporting of remediated vulnerabilities", but that still isn't helping. As a result, we consider that host to still have the vulnerability "Open". Is this the expected behaviour? I thought this setting would report the remediated vulnerabilities each time the scan runs?
Hi, I'm trying to put together some search queries for some common anomaly detection. I've been trying to find ones for these issues and I seem to come up with nothing. Some common ones would be (see the sketch after this list):

Drastic change in events per second
Drastic change in size of events
Blocked outputs
Change of health of inputs
Dropped events
CPU percentage use that is REALLY high
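As a starting point for the first item, a sketch that flags hours where indexing throughput deviates sharply from its recent average, using the metrics.log data every Splunk instance writes to _internal (the 24-hour window and 2-sigma threshold are arbitrary choices to tune):

index=_internal source=*metrics.log group=per_index_thruput
| timechart span=1h sum(ev) as events
| streamstats window=24 avg(events) as avg_ev stdev(events) as stdev_ev
| where abs(events - avg_ev) > 2 * stdev_ev

The same pattern applied to sum(kb) covers drastic changes in event size, and group=queue with blocked=true in the same log surfaces blocked outputs.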
I'm working with a custom TA, AlertAction_SFTP, that has the following .conf.spec file:

[my_sftp_alert_action]
param.sftp_server = <string>
param.sftp_user = <string>
param.sftp_rfile = <string>
param.sftp_key = <string>
param.ssh_key_dir = <string>
param.sftp_password = <string>

When I try to use $date$ in the file name, filename-$date$, I get "Remote path is invalid." I've tried multiple ways of doing this, including adding date to my search:

index=vuln sourcetype="qualys:hostDetection" signature="SMB Version 1 Enabled" TAGS="*Server*" earliest=-1d@d latest=@d
| eval date=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| table date, *

I've tried $results.date$, $date$, and a couple of other things. Is there some reason that the rfile path must not use a Splunk variable? TIA Joe
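One thing to try, offered as an assumption based on how tokens are generally expanded in custom alert action parameters rather than anything verified against this particular TA: the per-result token form is $result.fieldname$ (singular "result"), which resolves against the first row returned by the search:

param.sftp_rfile = filename-$result.date$.csv

If the TA validates the remote path before token expansion, though, any $...$ form may be rejected regardless.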
Do you happen to know if Cisco Meraki syslog, especially Flows and URLs, has bytes in and bytes out? We're logging Meraki and there's no field whatsoever for bytes. Is it something that can be configured from the Meraki logger console? Or does the solution itself not record that?
Hello - I am trying to script the installation of the Mac Splunk Universal Forwarder package. The package is a disk image (.dmg). I understand that we can mount the image using hdiutil and access the volume to find the .pkg file. The issue comes when we attempt to run installer -pkg volume/splunkuf.pkg -target /Applications/SplunkUf/ : the end user is prompted to answer dialog boxes, which we do not want to occur. Is there a switch to use to install the pkg file silently? TIA JH
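A sketch of the usual non-interactive pattern; the .dmg path, volume name, and .pkg name below are assumptions to adapt, and installer normally takes a volume such as / as its -target rather than an application folder:

#!/bin/sh
# Mount the disk image without opening a Finder window
hdiutil attach splunkforwarder.dmg -nobrowse -quiet

# Run the installer non-interactively against the boot volume
sudo installer -pkg "/Volumes/SplunkForwarder/splunkforwarder.pkg" -target /

# Unmount when done
hdiutil detach "/Volumes/SplunkForwarder" -quiet

Run from a root/management context (e.g. via an MDM), this should not present any dialogs to the end user.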
Hello, I am trying to find the dates when the host stopped sending logs to Splunk in the last 6 months. I have used the search below, but it can only find the earliest and latest indexed time; I would also like to know the dates on which the host stopped sending logs.

| tstats count as totalcount earliest(_time) as firstTime latest(_time) as lastTime where index=linux host=xyz by host
| fieldformat firstTime=strftime(firstTime,"%Y-%m-%d %H:%M:%S")
| fieldformat lastTime=strftime(lastTime,"%Y-%m-%d %H:%M:%S")

Thanks
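A sketch of one way to surface the gaps themselves: bucket the counts by day, make the time axis continuous so missing days appear as rows, and keep only the zero-count days (set the 6-month range via the time picker or in the where clause as below):

| tstats count where index=linux host=xyz earliest=-6mon by _time span=1d
| makecontinuous _time span=1d
| fillnull value=0 count
| where count=0
| fieldformat _time=strftime(_time, "%Y-%m-%d")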
Hi, are there any specific fields in the VPN logs we can exclude which are not needed for investigation, so that we can save on license cost?
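Which fields are dispensable depends on your investigation use cases, but mechanically this is usually done at index time with SEDCMD in props.conf on the forwarder or indexer. A sketch that strips one verbose key=value pair; the sourcetype and field name here are placeholders, not real VPN log fields:

[vpn:syslog]
# Remove a noisy key=value pair from _raw before indexing (field name is illustrative)
SEDCMD-strip_useragent = s/useragent="[^"]*"//g

Measure the savings against what investigators actually need before rolling this out; once stripped at index time, the data is gone.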