All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How can I cut some parts of my message prior to index time? I tried to use both SEDCMD and a transform on raw messages, but I still get the full content each time. Here is my current props configuration:

[ETW_SILK_JSON]
description = silk etw
LINE_BREAKER = ([\r\n]+"event":)
SHOULD_LINEMERGE = false
CHARSET = UTF-8
TRUNCATE = 0
# TRANSFORMS-cleanjson = strip_event_prefix
SEDCMD-strip_event = s/^"event":\{\s*//

And my message sample:

"event":{{"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88","YaraMatch":[],"ProviderName":"Microsoft-Windows-Kernel-Network","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","Opcode":11,"OpcodeName":"Datareceived.","TimeStamp":"2024-07-22T14:29:27.6882177+03:00","ThreadID":10008,"ProcessID":1224,"ProcessName":"svchost","PointerSize":8,"EventDataLength":28,"XmlEventData":{"FormattedMessage":"TCPv4: 43 bytes received from 1,721,149,632:15,629 to -23,680,832:14,326. ","connid":"0","sport":"15,629","_PID":"820","seqnum":"0","MSec":"339.9806","saddr":"1,721,149,632","size":"43","PID":"1224","dport":"14,326","TID":"10008","ProviderName":"Microsoft-Windows-Kernel-Network","PName":"","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","daddr":"-23,680,832"}}}

I want to get rid of the "event" prefix, but none of the options seems to work.
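As a side check, the sed expression itself can be exercised outside Splunk; if it behaves as expected on the raw text, the problem is more likely *where* in the pipeline the rule is applied (e.g. which sourcetype the data actually arrives with) than the pattern itself. A minimal Python sketch, using an abbreviated version of the sample string from the post:

```python
import re

# Abbreviated stand-in for the raw event shown in the post
raw = '"event":{{"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88","Opcode":11}}'

# Same pattern as SEDCMD-strip_event = s/^"event":\{\s*//
stripped = re.sub(r'^"event":\{\s*', '', raw)

print(stripped)  # → {"ProviderGuid":"7dd42a49-...","Opcode":11}}
```

The pattern does strip the leading `"event":{`, so if Splunk still indexes the full content, it is worth confirming that the events really match the `[ETW_SILK_JSON]` sourcetype stanza at parse time.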
Hey Splunkers, is there any way to use "Realname" and "Mail" within a ProxySSO setup? We are using ProxySSO for authentication and authorization. I figured out that this configuration in authorization.conf works and the user shows up correctly:

[userToRoleMap_ProxySSO]
myuser = myrole-1;myrole-2::test::mymail@test.com

Unfortunately, I didn't find any way to populate this information from the ProxySSO information like I did for RemoteGroup and RemoteUser.

Kind Regards
Anyone have an idea how to do this? For testing purposes of course.
5 columns and 79 rows
I'd like to know which use cases are applied on Splunk Enterprise.
How large is your csv?
Hello. Thank you for all your help and support.

In a registered lookup table file (CSV), if I want to match the value of a specific field against either of two columns, how should I set the input fields in the automatic lookup setup screen?

For example, my table file has the following columns:

PC_Name,MacAddress1,MacAddress2

The MAC address in the Splunk index log resides in either MacAddress1 or MacAddress2 in the table file. Therefore, I want to search both columns and return the PC_Name of the matching record.

As a test, I set the following two input fields from the lookup settings screen of the GUI, but PC_Name did not appear in the search result fields (see attached image; with only one of these input fields, PC_Name is output):

MACAddr1 = Mac address
MACAddr2 = Mac address

So, as a workaround, I split the lookup settings into two, setting MACAddr1 = MacAddress in one and MACAddr2 = MacAddress in the other as input fields, which does display the search results. However, this is not smart. Note that the lookup is configured from the Splunk Web UI. What is the best way to configure this?
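For what it's worth, an automatic lookup with two input fields requires *both* to match (they are ANDed), which would explain why PC_Name disappears when both are set. The either-column semantics being asked for can be sketched in Python (column names taken from the post; the data rows are hypothetical):

```python
# Hypothetical rows mimicking the CSV lookup: PC_Name,MacAddress1,MacAddress2
lookup_rows = [
    {"PC_Name": "PC-01", "MacAddress1": "aa:bb:cc:00:00:01", "MacAddress2": "aa:bb:cc:00:00:02"},
    {"PC_Name": "PC-02", "MacAddress1": "aa:bb:cc:00:00:03", "MacAddress2": "aa:bb:cc:00:00:04"},
]

def pc_name_for(mac):
    """Return PC_Name if the MAC appears in either column (OR, not AND)."""
    for row in lookup_rows:
        if mac in (row["MacAddress1"], row["MacAddress2"]):
            return row["PC_Name"]
    return None

print(pc_name_for("aa:bb:cc:00:00:04"))  # → PC-02
print(pc_name_for("ff:ff:ff:ff:ff:ff"))  # → None
```

In SPL this is typically achieved with two lookup invocations against the same file (one per column) and a coalesce of the two outputs, which is essentially the split-into-two workaround already described.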
I have a dashboard which gives the error below on the user's end, but when I open the dashboard I don't see any error on my end and it runs perfectly fine with the proper results:

Error in 'lookup' command: Could not construct lookup 'EventCodes, EventCode, LogName, OUTPUTNEW, desc'. See search.log for more details.
Eventtype 'msad-rep-errors' does not exist or is disabled.

Please help me fix this issue.
Hi Manall, I want to use NAS as read storage, i.e. as cold, not hot. BTW, it has worked fine for me so far.
Thank you for your response and the suggestion about a custom modular input. We will look into that further.
Hi @Player01,
Are all your searches slow, or only this one? How many logs do you have in your index and in your lookup? Which storage do you have? Is it compliant with the Splunk requirement of at least 800 IOPS?
Ciao. Giuseppe
Hi @atr, as @deepakc said, check and disable the local firewall. Then look at /opt/splunk/var/log/splunk/first_install.log to see if anything is reported there. If you continue to have issues, open a case with Splunk Support, but I'm confident that the issue is the local firewall.
Ciao. Giuseppe
@ITWhisperer I tried to roll the buckets and it's fine, but the error returns after a short time. This is what I meant.
Yes, these are values for id in my events.
Start by checking that you have port 8000 available - check firewall ports. The SSH issue is again most likely related to port 22 access (but this is not a Splunk issue). Speak to a Linux OS admin, as your issues seem to be OS-configuration related.
The problem is actually deeper, because appendcols works only if the lookup and the index search have the same number of rows (and the same sort order). In this use case, that's the opposite of the premise. I will have to look deeper - but there should be something - it could be even more cumbersome.
If the lookup is of any size and the number of events from the index search is large, yes, this will take very long, because it is effectively performing

| search url = "foo.com" OR url = "bar.com" OR url = "barz" OR ...

Moving the subsearch into the index search can save some time because you won't be streaming as many events:

index=myindex sourcetype=mysource [| inputlookup LCL_url.csv | fields url]
| stats count by url
| fields - count
| sort url

However, using a lookup file as a subsearch may not be the best option to begin with.
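The cost difference described above (comparing every event against a long OR chain versus a single membership test) can be illustrated outside SPL; this Python sketch is purely illustrative, with made-up URLs, not Splunk internals:

```python
# Allow-list as it would come from the lookup file (hypothetical values)
allowed = {"foo.com", "bar.com", "barz"}
events = ["foo.com", "qux.net", "bar.com", "foo.com"]

# OR-chain style: each event is compared against every list entry
or_matches = [e for e in events if any(e == u for u in allowed)]

# Set style: one hash lookup per event, independent of list size
set_matches = [e for e in events if e in allowed]

print(or_matches)  # → ['foo.com', 'bar.com', 'foo.com']
print(or_matches == set_matches)  # → True
```

Both produce the same result, but the OR-chain work grows with the size of the lookup times the number of events, which is why a large expanded subsearch feels so slow.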
This has nothing to do with now(). Your strptime receives a literal string "First Detected" and tries to calculate a time. The result is null, of course. Change the double quotes to single quotes:

index=stuff source=file Severity="Critical"
| lookup detail.csv "IP Address" OUTPUTNEW Manager
| eval First_DiscoveredTS = strptime('First Discovered', "%b %d, %Y %H:%M:%S %Z"), Last_ObservedTS = strptime('Last Observed', "%b %d, %Y %H:%M:%S %Z"), firstNowDiff = (now() - First_DiscoveredTS)/86400, Days = floor(firstNowDiff)
| stats count by Manager Days
| where Days > 30

Play with this emulation and compare with real data:

| makeresults format=csv data="First Discovered, Last Observed
\"Jul 26, 2023 16:50:26 UTC\", \"Jul 19, 2024 09:06:32 UTC\""
| appendcols [makeresults format=csv data="Manager
foo"]
``` the above emulates
index=stuff source=file Severity="Critical"
| lookup detail.csv "IP Address" OUTPUTNEW Manager ```

Output from this is:

Manager  Days  count
foo      362   1
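The same format string and day arithmetic can be checked in plain Python (the "now" value is pinned to a hypothetical date so the result is reproducible, unlike SPL's now()):

```python
from datetime import datetime, timezone
from math import floor

# Same layout as the SPL format: "%b %d, %Y %H:%M:%S %Z"
first = datetime.strptime("Jul 26, 2023 16:50:26 UTC", "%b %d, %Y %H:%M:%S %Z")
first = first.replace(tzinfo=timezone.utc)

now = datetime(2024, 7, 23, tzinfo=timezone.utc)  # hypothetical "now"
days = floor((now - first).total_seconds() / 86400)

print(days)  # → 362
```

Note that Python, like SPL, parses the timestamp only when it receives the actual string value; passing the literal text "First Discovered" would raise an error here, just as SPL's strptime returns null.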
Like this?

| rex "message: \((?<in_parentheses>[^\)]+)"

You can test with:

| makeresults format=csv data="_raw
message: (c4328dd3-d16e-4df8-a8e6-b2ebcab9d8bc)"
``` data emulation above ```
| rex "message: \((?<in_parentheses>[^\)]+)"

_raw                                              in_parentheses
message: (c4328dd3-d16e-4df8-a8e6-b2ebcab9d8bc)   c4328dd3-d16e-4df8-a8e6-b2ebcab9d8bc
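The same extraction can be verified outside Splunk; Python uses `(?P<name>...)` for named groups where rex uses `(?<name>...)`, but the pattern is otherwise identical:

```python
import re

raw = "message: (c4328dd3-d16e-4df8-a8e6-b2ebcab9d8bc)"

# Same pattern as the rex above, in Python named-group syntax
m = re.search(r"message: \((?P<in_parentheses>[^\)]+)", raw)

print(m.group("in_parentheses"))  # → c4328dd3-d16e-4df8-a8e6-b2ebcab9d8bc
```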
Hi, I can see Splunk is affected by the vulnerabilities fixed in OpenSSL 1.0.2zk. I've applied the latest 9.2.2 to Splunk Enterprise and the Universal Forwarder, and they are still running the older 1.0.2zj version. Any ideas when this will be remediated?

OpenSSL bulletin on 26 June: [ Vulnerabilities ] - /news/vulnerabilities-1.0.2.html (openssl.org)

From the Splunk advisory, the latest OpenSSL-related update was in March, for the zj version.