Hi all, I have to ingest logs from Securelog. I'm able to collect and parse the Linux logs, but I have an issue when parsing the Windows logs: how can I connect the winlogbeat format to Splunk_TA_windows so that events are parsed correctly? The winlogbeat event format is different from the native Windows log format, so Splunk_TA_windows fails to parse the logs. Is there a connector, or must I manually transform the winlogbeat logs into the format Splunk_TA_windows expects? Thank you for your help. Ciao. Giuseppe
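For context, a hedged sketch of the manual route (assuming the events arrive as winlogbeat JSON with ECS-style field names such as winlog.event_id; the sourcetype name and aliases below are illustrative, not confirmed TA behavior): parse the JSON and alias the winlogbeat field names to the ones the Windows TA expects, e.g. in props.conf:

# hypothetical sourcetype for winlogbeat JSON events
[winlogbeat:json]
KV_MODE = json
FIELDALIAS-eventcode = "winlog.event_id" AS EventCode
FIELDALIAS-computer = "winlog.computer_name" AS ComputerName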
Hi All, I am trying to do a search to compare two different sources. First, I created a lookup to capture the rules matched by my search. In the background, my alert runs and appends results to this CSV lookup file. The lookup file also has a field called Explanation. I am trying to build a search that updates a row whenever anything changes in the raw data. There is an important point, though: if the raw data for a lookup row has not changed, the row in the lookup file should not change and should keep its Explanation; otherwise, the row should be deleted. Thank you
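A hedged sketch of one common pattern (the field names rule_id and raw_value and the file my_rules.csv are placeholders): merge the fresh results with the existing lookup, let stats keep the stored Explanation, and drop rows that no longer appear in the raw data. This variant always preserves Explanation; adjust the where clause if changed rows should instead be removed.

<your alert search>
| eval in_new=1
| fields rule_id raw_value in_new
| inputlookup append=true my_rules.csv
| stats first(raw_value) as raw_value, first(Explanation) as Explanation, max(in_new) as in_new by rule_id
| where in_new=1
| fields - in_new
| outputlookup my_rules.csv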
Hi, I have this method: public ActionResult MethodEZ(EZDokumentJson dokument), which returns this JsonResult:

{ "Data1": "", "Data2": null, "Data3": null, "Data4": null, "DokumentId": "dvsd-5dsafd-55555-1111-afdfas" }

I would like to ask for your help with collecting the DokumentId value from the JsonResult. Here are my last attempts, which don't work. My data collection looks like this:

ToString().Split(string/"DokumentId": ).[1].Split(string/,).[0]
toString().split("DokumentId").[1].split(\,).[0]

Thanks....
Dear Team, I installed Enterprise Security on the search head and downloaded Splunk_TA_ForIndexer from the ES General settings. Now I am stuck on the UF technology add-on: where can I find it? There is no option for it in the ES interface, and I can't find it on the Splunkbase portal. I tried multiple search keywords on Splunkbase with no luck.
Hi, I'm uncertain which process name field (CreatorProcessName, ParentProcessName, or NewProcessName) is the appropriate one to use for Windows event blacklisting in this context. Thanks.
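For reference, whichever field you settle on (which of those names appears depends on the Windows version and on XML vs. classic rendering), Windows event blacklisting in inputs.conf keys off the rendered event fields. A hedged sketch, with an illustrative EventCode and process path:

[WinEventLog://Security]
blacklist1 = EventCode="4688" Message="New Process Name:\s+.*\\excluded_tool\.exe"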
How do I eliminate partial user ID values coming out of a search query? Here are examples of incomplete user IDs, which shouldn't appear at all. The middle GSA line is the correct example user ID; the rest is garbage that I want to eliminate:

01022703
021216
07602381
"1206931120@GSA.GOV"
177
177670
1969412
232789
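A hedged sketch of one way to keep only well-formed IDs (the field name userid is an assumption; adjust the pattern to your real ID format):

... your base search ...
| regex userid="\d+@GSA\.GOV"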
Hello All, I have a lookup file with multiple fields. I am reading it using the inputlookup command and applying some filters. Now I need to apply a regex to a field and extract the corresponding matched string from each row of the lookup into a separate field. The regex is: xxx[\_\w]+:([a-z_]+) I need your guidance and input to build this. Thank you, Taruchit
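A minimal sketch using the regex given above, with a named capture group so the match lands in its own field (the lookup and field names are placeholders):

| inputlookup my_lookup.csv
| search <your filters>
| rex field=source_field "xxx[\_\w]+:(?<matched_value>[a-z_]+)"
| table source_field matched_value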
I want to filter the Palo Alto logs at the forwarder level by inspecting the packet before indexing (for licensing reasons), based on certain conditions like zone, firewall name (enterprise), etc. The logs come to both our UF and HF; what is the best way to achieve this? I was looking into a few docs suggesting applying an ingest-time eval; is that feasible? Can anyone please help me with this?
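For what it's worth, ingest-time filtering has to happen where parsing happens (the HF or the indexers, not the UF). A hedged sketch of the classic nullQueue route (the sourcetype and zone pattern below are illustrative); INGEST_EVAL can also assign the queue with similar effect.

props.conf:
[pan:traffic]
TRANSFORMS-droppalo = drop_unwanted_zone

transforms.conf:
[drop_unwanted_zone]
REGEX = ,unwanted_zone,
DEST_KEY = queue
FORMAT = nullQueue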
Got a customer wanting the UFs to send data to the Forcepoint DLP and then on to the intermediate heavy forwarder. The reason is to have the DLP mask PII, even though we could have Splunk do that. Is there any reason this data flow wouldn't work? The admin I have to go through for the UF installs and the DLP setup tells me that Forcepoint can't read Splunk data sent to it, which I don't believe. Apologies if this is an easy question, but any documentation to support the above architecture would be great.
Hello, I am trying to add some logic/formatting to my list of failed authentications. Here's my search query:

| tstats summariesonly=true count from datamodel="Authentication" WHERE Authentication.action="failure" AND Authentication.user="*" AND Authentication.src="*" AND Authentication.user!=*$ by Authentication.user
| `drop_dm_object_name("Authentication")`
| sort - count
| head 10

I want it to count how many consecutive days a user has been on this list. Is that possible?
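A hedged sketch of one way to count consecutive days: bucket the failures per day, then start a new streak whenever a user's gap between days exceeds 24 hours. The streak logic is illustrative and untested against real data.

| tstats summariesonly=true count from datamodel="Authentication" WHERE Authentication.action="failure" by Authentication.user _time span=1d
| `drop_dm_object_name("Authentication")`
| sort 0 user _time
| streamstats current=false last(_time) as prev_day by user
| eval new_streak=if(isnull(prev_day) OR _time-prev_day>86400, 1, 0)
| streamstats sum(new_streak) as streak_id by user
| stats count as consecutive_days max(_time) as latest_day by user, streak_id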
Hi All, here is what my event looks like:

20/11/2023 12:47:05 (01) >> AdyenProxy::AdyenPaymentResponse::ProcessPaymentFailure::Additional response -> Message : NotAllowed ; Refusal Reason : message=MessageHeader.POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450

I am trying to extract the part "POIID: NotAllowed Value: P400Plus-805598742, Reason: my POIID is P400Plus-805598450". I am using this regex:

| rex field=_raw "MessageHeader.+(?<POIID_Error>)-*"

But the POIID_Error field value is blank after running the query. Attaching a screenshot for reference. Any suggestion to fix this is appreciated.
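The capture group (?<POIID_Error>) in the posted regex is empty, so it captures zero characters by design. A hedged fix, assuming the goal is everything from POIID: to the end of the line:

| rex field=_raw "MessageHeader\.(?<POIID_Error>POIID:.*)$"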
Hi. We have an indexer cluster of 4 nodes with a little over 100 indexes. We recently took a look at the cluster manager fixup tasks and noticed a large number of them (24,000) pending over 100 days, for a select few of the indexes. The majority of these tasks are for the following reasons: "Received shutdown notification from peer" and "Cannot replicate as bucket hasn't rolled yet". For some reason these few indexes are quite low volume but have a large number of buckets. Ideally I would like to clear these tasks. If we aren't precious about the data, would a suitable solution be to remove the indexes from the cluster configuration, manually delete the data folders for the indexes, and re-enable the indexes? Or could we reduce the data size/number of buckets on the index to clear out these tasks? Example of one of the index configurations:

# staging: 0.01 GB/day, 91 days hot, 304 days cold
[staging]
homePath = /splunkhot/staging/db
coldPath = /splunkcold/staging/colddb
thawedPath = /splunkcold/staging/thaweddb
maxDataSize = 200
frozenTimePeriodInSecs = 34128000
maxHotBuckets = 1
maxWarmDBCount = 300
homePath.maxDataSizeMB = 400
coldPath.maxDataSizeMB = 1000
maxTotalDataSizeMB = 1400

Thanks for any advice.
Hello. After upgrading from version 7 to version 8.2.12, we noticed that ui-prefs.conf is no longer working. Inside /etc/user/app/local/ui-prefs.conf we have every user's customizations; now they are totally skipped. Even the admin can't change his default view type (e.g., fast/smart/verbose). Is there a reason for this, and is there a way to restore this feature? Thanks.
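For reference, a typical per-user stanza of the kind that stopped being honored (the setting name comes from ui-prefs.conf.spec; whether 8.2 still applies it from this location is exactly the question):

[search]
display.page.search.mode = verbose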
Instead of getting the search box right after Splunk login, I want the landing page to look like a welcome page, like the one in the splunk_secure_gateway app. I have managed the part about making the navigation view, but I am not able to change the launch page to a website-style welcome page. Please advise what changes I can make to a config file or XML file so I can get whatever page look I want.
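A hedged sketch: in an app's default/data/ui/nav/default.xml you can mark a custom view as the app's default, assuming a view file named welcome.xml exists in that app (you may also need to make that app the user's default app):

<nav search_view="search">
  <view name="welcome" default="true" />
  <view name="search" />
</nav>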
Dear All, I have one index, and I use it to store both messages and a summary report. In report="report_b", it stores the running case name and the device ID (DEV_ID) used, at timestamp _time, e.g.:

_time  DEV_ID  case_name   case_action
01:00  111     ping111.py  start
01:20  111     ping111.py  end
02:00  222     ping222.py  start
02:30  222     ping222.py  end
02:40  111     ping222.py  start
03:00  111     ping222.py  end

For Message_Name="event_a", it is stored in index=A as below:

_time  LOG_ID  Message_Name
01:10  01      event_a
02:50  02      event_a

I would like to associate each event_a with the case that was running when it was sent. So I use the code below: first, find the device ID (DEV_ID) associated with the log (LOG_ID); second, associate event_a with case_name by DEV_ID; finally, list only the event_a rows.

(index=A Message_Name="event_a") OR (index=A report="report_b")
| lookup table_A.csv LOG_ID OUTPUT DEV_ID
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start",last_case_name,case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name

The output would be:

_time  Message_Name  LOG_ID  DEV_ID  case_name
01:10  event_a       01      111     ping111.py
02:50  event_a       02      111     ping222.py

The code works fine, but the amount of data is huge, so the lookup command takes a very long time. Furthermore, there is actually no need to apply the lookup command to report="report_b" events:

(index=A Message_Name="event_a"): 150,000 records in 24 hours
(index=A report="report_b"): 700,000 records in 24 hours

Is there any way to rewrite the code so the lookup is applied only to events belonging to (index=A Message_Name="event_a")? I tried using subsearch, append, and appendpipe to find the associated DEV_ID first, but it is not working. Thank you so much.
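One hedged idea (an untested sketch): blank out the lookup key on report_b rows so the lookup has nothing to match there, and use OUTPUTNEW so the report rows keep their existing DEV_ID. Whether this meaningfully reduces the lookup cost is worth measuring.

(index=A Message_Name="event_a") OR (index=A report="report_b")
| eval lookup_id=if(Message_Name=="event_a", LOG_ID, null())
| lookup table_A.csv LOG_ID as lookup_id OUTPUTNEW DEV_ID
| sort 0 + _time
| streamstats current=false last(case_name) as last_case_name, last(case_action) as last_case_action by DEV_ID
| eval case_name=if(isnull(case_name) AND last_case_action="start",last_case_name,case_name)
| where isnotnull(Message_Name)
| table _time Message_Name LOG_ID DEV_ID case_name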
I have an Add-On which defines a new data input. Via the UI, I can easily create new instances of the same input, but I want to create them programmatically, via Python. The inputs.conf.spec file for the app contains:

[strava_api://<name>]
interval =
index =
access_code =
start_time =
reindex_data =

These are the values I am prompted for when using the UI manually, but how do I achieve the same result via Python (using python3)? I have the Python SDK installed, but I just don't know where to start and would appreciate some inspiration from the community. Thanks in advance for any comments/tips/solutions. (For those with a keen eye, the Add-On in question is Strava for Splunk.)
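A minimal sketch with the Splunk Python SDK (splunklib); the connection details, input name, and parameter values below are placeholders:

import splunklib.client as client

# Connect to splunkd's management port (credentials are placeholders)
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Create a new instance of the custom input kind, passing the same
# parameters the UI prompts for
service.inputs.create(
    "my_strava_input",   # the <name> part of [strava_api://<name>]
    "strava_api",        # the input kind from inputs.conf.spec
    interval="3600",
    index="strava",
    access_code="REPLACE_ME",
    start_time="0",
    reindex_data="0",
)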
I have installed the deployment server and configured the required inputs.conf and outputs.conf in a deployment app for ingesting logs from the UF to my indexer, and I pointed the universal forwarder at the deployment server. I can see the new host in the deployment server under the Forwarder Management tab, and I mapped it to the appropriate server classes and apps in the Forwarder Management configuration. Still, I'm not able to get the logs on the indexer. When I check splunkd.log on the UF, I find the errors below:

11-20-2023 17:25:40.602 +0400 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) failed validation; error=7, reason="certificate signature failure"
11-20-2023 17:25:40.602 +0400 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='decrypt error'.
11-20-2023 17:25:40.602 +0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
11-20-2023 17:25:40.602 +0400 INFO HttpPubSubConnection - Could not obtain connection, will retry after=86.429 seconds.
11-20-2023 17:25:40.778 +0400 INFO WatchedFile - Will begin reading at offset=2295058 for file='/var/log/audit/audit.log'.
11-20-2023 17:25:46.129 +0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/anaconda/ks-script-lk6ot_yw.log'.
11-20-2023 17:25:46.130 +0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/anaconda/ks-script-wo9l091q.log'.
11-20-2023 17:25:49.856 +0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-20-2023 17:26:01.857 +0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-20-2023 17:26:07.910 +0400 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
11-20-2023 17:26:07.914 +0400 INFO TcpOutputProc - Removing quarantine from idx=192.168.1.5:9997
11-20-2023 17:26:07.914 +0400 INFO TcpOutputProc - Removing quarantine from idx=192.168.1.6:9997
11-20-2023 17:26:07.921 +0400 ERROR TcpOutputFd - Read error. Connection reset by peer
11-20-2023 17:26:07.928 +0400 ERROR TcpOutputFd - Read error. Connection reset by peer
Hello Splunk team, as shown in the picture, I found a UI duplication problem in the data type selection module. I tested different browsers and found the problem in all of them. I suspect it is caused by a defect in the Splunk UI; can you help solve this problem? Thank you!
I have a single value trellis view showing the status of items: up (green) and down (red). When the status is down (red), I would like the trellis view to flash or blink. I found the HTML/CSS below for a blinking effect:

<html>
<style>
@keyframes blink {
100%, 0% { opacity: 0.6; }
60% { opacity: 0.9; }
}
#singlevalue rect {
animation: blink 0.8s infinite;
}
</style>
</html>

However, it makes all single values blink. I am using rangemap to set the single value background to red/green. How can I make only the red trellis panels blink? Thanks a lot.
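A hedged idea, not verified across Splunk versions: since the rangemap color ends up as the SVG rect's fill, a CSS attribute selector might scope the animation to red panels only. The hex value below assumes Splunk's default red; inspect the rendered SVG in the browser dev tools to confirm the actual fill value.

<html>
<style>
@keyframes blink {
100%, 0% { opacity: 0.6; }
60% { opacity: 0.9; }
}
#singlevalue rect[fill="#DC4E41"] {
animation: blink 0.8s infinite;
}
</style>
</html>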
I'm trying to make SOC use cases clear, concise, and easy to find later. It is possible to build threat detection use cases based on MITRE ATT&CK, but threat detection is not the only thing a SOC handles; there are many other requirements, such as compliance and business use cases. Which approach would be more effective and correct? Here are my questions.

Use Case Development:
- Best practices for effective SOC use cases and recommended frameworks?

Documentation and Knowledge Management:
- Strategies/tools for organizing SOC use cases for searchability?

Continuous Improvement:
- Methods for improving and updating SOC use cases over time?
- Can you share examples of how penetration testing results have influenced the development of SOC use cases?

Risk Assessment Integration:
- How do you align SOC use cases with risk levels identified in risk assessments?
- Are there specific metrics or indicators from risk assessments that should be incorporated into SOC use cases?
- What best practices do you suggest for regularly reviewing and updating SOC use cases based on changes in risk assessments?