All Posts

First things first. What does your raw data look like? The sample you pasted - is it one event or multiple events? Where is this data coming from and how are you getting it? Because it looks as if it was XML horribly butchered by splitting it into single lines and sending each line separately. And that's the first thing that should be fixed, instead of trying to do workarounds at search time.
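For illustration only: if the whole <subMerchantData> block does reach Splunk as one chunk and is merely being broken apart at ingestion, a minimal props.conf sketch along these lines (the sourcetype name is hypothetical) could keep it together as a single event. It cannot help if each line really is sent as a separate network message; then the fix has to happen at the source.

# props.conf on the indexers (or first heavy forwarder); sourcetype name is an assumption
[my_xml_sourcetype]
SHOULD_LINEMERGE = false
# break only before a timestamped line that opens a new <subMerchantData> block
LINE_BREAKER = ([\r\n]+)\[\d{4}-\d{2}-\d{2} [^\]]+\]\s*<subMerchantData>
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N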
Try it this way around:

| spath output=RAM ResourceInfo.RAM
| rex field=RAM max_match=0 "\"(?<tmp>[^\"]+\":[\d\.]+)"
| mvexpand tmp
| rex field=tmp "(?<component>[^\"]+)\":(?<Value>[\d\.]+)"
| table component Value
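If the same table is needed for ROM and NVM too, here is one hedged sketch building on the above - the intermediate field names (ram_kv, rom_kv, nvm_kv, kv) are made up, and mvmap requires Splunk 8.0+. Each key/value pair is prefixed with its section name before expanding, so collisions between sections stay visible:

| spath output=RAM ResourceInfo.RAM
| spath output=ROM ResourceInfo.ROM
| spath output=NVM ResourceInfo.NVM
| rex field=RAM max_match=0 "\"(?<ram_kv>[^\"]+\":[\d\.]+)"
| rex field=ROM max_match=0 "\"(?<rom_kv>[^\"]+\":[\d\.]+)"
| rex field=NVM max_match=0 "\"(?<nvm_kv>[^\"]+\":[\d\.]+)"
| eval kv=mvappend(mvmap(ram_kv, "RAM,".ram_kv), mvmap(rom_kv, "ROM,".rom_kv), mvmap(nvm_kv, "NVM,".nvm_kv))
| mvexpand kv
| rex field=kv "^(?<section>[^,]+),(?<component>[^\"]+)\":(?<Value>[\d\.]+)"
| table section component Value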
Hi @tomjb94, yes, obviously you have to extract the fields using regexes. I can help you with the following regex, which extracts all the values except orderCode - I don't know which part of the logs that one is in, so if you want my help with it, please highlight that value in your logs using bold. Anyway, you can use a search like the following (everything except orderCode):

index=test
| rex "^\[2024-09-10 07:27:46\.424 \(TID:(?<merchantCode>\d+).*\<subState\>(?<subState>\w+).*\<subCountryCode\>(?<subCountryCode>\d+)"
| search merchantCode=MERCHANTCODE1 subCountryCode=* subState=*
| stats count by merchantCode subCountryCode subState

You can test the regex at https://regex101.com/r/KZMUxp/1
It also isn't clear to me whether you need the other fields as well (SubState, SubCountryCode, SubCity, PFID, SubName, SubID, SubPostalCode, SubTaxID). If yes, you have to extract all of them; if you want my help, please indicate the part of the log each of them comes from.
Ciao.
Giuseppe
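As a hedged follow-up sketch (not from the thread itself): if all the elements arrive in the same event, the remaining fields could be pulled out with one rex per element, reusing the element names from Tom's sample. If each line is a separate event, they would first need to be stitched together (e.g. with stats over a common transaction ID).

| rex "<pfId>(?<PFID>[^<]+)</pfId>"
| rex "<subName>(?<SubName>[^<]+)</subName>"
| rex "<subId>(?<SubID>[^<]+)</subId>"
| rex "<subCity>(?<SubCity>[^<]+)</subCity>"
| rex "<subState>(?<SubState>[^<]+)</subState>"
| rex "<subCountryCode>(?<SubCountryCode>[^<]+)</subCountryCode>"
| rex "<subPostalCode>(?<SubPostalCode>[^<]+)</subPostalCode>"
| rex "<subTaxId>(?<SubTaxID>[^<]+)</subTaxId>"
| stats count by SubState SubCountryCode SubCity PFID SubName SubID SubPostalCode SubTaxID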
Hi, I am currently working with NGINX Plus as an ingress controller for my Kubernetes cluster and using SC4S to forward logs to Splunk Enterprise. However, I notice that SC4S does not forward all of the logs, including the App Protect WAF and DoS logs. Do the WAF and DoS modules require special setup to forward logs? I tried with syslog-ng as in this example, https://github.com/nginxinc/kubernetes-ingress/blob/v3.6.2/examples/ingress-resources/app-protect-dos/README.md, but the logs are not showing up in Splunk Enterprise. Thanks.
Hi All - I need help with a fairly complex search I am being asked to build by a user. The ask is that the below fields are extracted from this XML sample:

[2024-09-10 07:27:46.424 (TID:14567876)] <subMerchantData>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <pfId>499072</pfId>
[2024-09-10 07:27:46.424 (TID:145767627)] <subName>testname</subName>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subId>123456</subId>
[2024-09-10 07:27:46.424 (TID:145767627)] <subStreet>1 TEST LANE</subStreet>
[2024-09-10 07:27:46.424 (TID:145767627)] <subCity>HongKong</subCity>
[2024-09-10 07:27:46.424 (TID:145767627)] <subState>HK</subState>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subCountryCode>344</subCountryCode>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subPostalCode>1556677</subPostalCode>
[2024-09-10 07:27:46.424 (TID:dad4d2e725854048)] <subTaxId>-15566777</subTaxId>
[2024-09-10 07:27:46.424 (TID:14567876)] </subMerchantData>

This search doesn't pull anything back, I believe because these are not extracted fields:

index=test merchantCode=MERCHANTCODE1 subCountryCode=* subState=* orderCode=*
| stats count by merchantCode subCountryCode subState orderCode

In addition to those, I need these fields: SubState, SubCountryCode, SubCity, PFID, SubName, SubID, SubPostalCode, SubTaxID. However, I'm not sure how this can be fulfilled - could anyone help with writing a search that would allow me to extract this info within a stats count?
Thanks, Tom
Hi splunkers! I'm facing an issue that is going to make me crazy! I've got to set the timestamp in the following logs (the timestamp field is the 11th field, the first one being the insert time added by the proxy itself):

2024-09-16T13:12:54+02:00 Logging-Client  "-1","username","1.2.3.4","POST","872","2211","www.facebook.com","/csp/reporting/","OBSERVED","","1726484997","2024-09-16 11:09:57","https","Social Networking","application/x-empty","","Minimal Risk","Remove 'X-Forwarded-For' Header","200","10.97.5.240","","","Firefox","102.0","Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0","firefox.exe","1.2.3.4","443","US","","t","t","t","f","f","computerName","","1.2.3.4","1.2.3.4","8080"

So I'm using a regex to extract fields and set the real timestamp in my props.conf:

[mySourcetype]
SHOULD_LINEMERGE = false
EXTRACT-mySourcetype = ^[^,\n]*,"(?P\w+)","(?P[^"]+)","(?P\w+)","(?P[^"]+)[^,\n]*,"(?P[^"]+)[^,\n]*,"(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P(?=\s*)|[^"]+)","(?P[^"]+)","(?P[^"]+)","(?P[^"]+)"$
TIME_PREFIX = (?:[^,]+,){11}
TIME_FORMAT = %Y-%m-%d %H:%M:%S

Then I get different results based on the source:
- File uploaded directly on the search head: extraction OK, timestamp OK
- File read from a universal forwarder: extraction OK, timestamp FAILED

There is NO heavy forwarder between the UF and the indexers. The props.conf is deployed only on the search heads. So, something is tricky here! If someone has an idea, I'd appreciate it! Cheers.
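A general Splunk point worth checking here (a likely cause, not a confirmed diagnosis): TIME_PREFIX, TIME_FORMAT and SHOULD_LINEMERGE are index-time settings, so they must live on the first full Splunk instance that parses the data - the indexers in this topology - while EXTRACT- is a search-time setting that belongs on the search heads. That would explain the symptom: an ad-hoc file upload is parsed by the search head itself, where the props exist, but UF-forwarded data is parsed by the indexers, where they don't. A minimal sketch of the split, reusing the stanza above:

# props.conf on the indexers (index-time)
[mySourcetype]
SHOULD_LINEMERGE = false
TIME_PREFIX = (?:[^,]+,){11}
TIME_FORMAT = %Y-%m-%d %H:%M:%S

# props.conf on the search heads (search-time)
[mySourcetype]
EXTRACT-mySourcetype = ... (the extraction regex above, unchanged)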
Hi @Ram2 ... may I know what happens when you try these props?

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
Table SPL:

| advhunt cred=all renew=True query="DeviceProcessEvents | where Timestamp > ago(30d) | where FileName has 'file.exe' | project DeviceName, FileName, ProcessCommandLine, FolderPath, AccountName"
| spath input=_raw
| stats count by AccountName,DeviceName
| sort -count

Source code of the panel:

{
    "type": "splunk.table",
    "options": {
        "count": 100,
        "dataOverlayMode": "none",
        "drilldown": "none",
        "showRowNumbers": false,
        "showInternalFields": false
    },
    "dataSources": {
        "primary": "ds_xxxxx"
    },
    "title": "File.exe (Last 30 Days)",
    "eventHandlers": [
        {
            "type": "drilldown.linkToSearch",
            "options": {
                "query": "| inputlookup lookuptable where field1=$row.user.value$\n| table field1, field2",
                "earliest": "auto",
                "latest": "auto",
                "type": "custom",
                "newTab": true
            }
        }
    ],
    "context": {},
    "showProgressBar": false,
    "showLastUpdated": false
}

SPL for the search on click:

| inputlookup lookuptable where field1=$row.user.value$
| table field1, field2
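One observation (a hedged suggestion, not a confirmed fix): $row.<field>.value$ tokens only resolve for columns that actually exist in the table, and the stats above produces AccountName and DeviceName, not user. A sketch of the event handler with the token pointing at AccountName instead, assuming that is the intended filter column:

"eventHandlers": [
    {
        "type": "drilldown.linkToSearch",
        "options": {
            "query": "| inputlookup lookuptable where field1=$row.AccountName.value$\n| table field1, field2",
            "earliest": "auto",
            "latest": "auto",
            "type": "custom",
            "newTab": true
        }
    }
]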
Hello there, I'm creating a visualization using Dashboard Studio and showing some fields with single-value visualizations, but the font size of the displayed data is dynamic, as in the capture. How can I make the data font size static at a certain size? I already added a fontSize value, but there is no change:

"options": {
    "fontSize": 24
}
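A hedged pointer: for the splunk.singlevalue visualization in Dashboard Studio, the option that controls the main number is typically majorFontSize rather than fontSize (worth verifying against the Dashboard Studio docs for your Splunk version). A minimal sketch:

"options": {
    "majorFontSize": 24
}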
Hi @Somesh, yes, it's the same. Only one attention point: configure search affinity on your Search Heads. This is relevant for more performant searches, avoiding a Search Head using the other site's Indexers, but mainly because otherwise a Search Head, when the primary site is down, keeps trying to search the Site1 Indexers as well and so misses part of the data. I encountered this issue during an acceptance test! Ciao. Giuseppe
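For reference, a minimal server.conf sketch of search affinity on a search head - the hostname and site values are examples, and the exact attribute names should be checked against your Splunk version's documentation:

# server.conf on each search head
[general]
site = site1
# use site = site0 to disable search affinity and search all sites

[clustering]
mode = searchhead
multisite = true
manager_uri = https://cluster-manager.example.com:8089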
ResourceInfo: {
    ID: "58",
    User: "abc",
    NVM: {
        a: "522523632",
        b: "80000000",
        c: "442523632",
        d: "14",
        ...
    },
    RAM: { [+] },
    ROM: { [+] }
}

For RAM, ROM and NVM I want to get the specific data inside them, forming a table like this for each:

component    Value
a            522523632
b            80000000
c            442523632
d            14
...

And I do it like this, but sometimes I get an error message like "field tmp does not exist", even though there is data. So I want to avoid mvzip and get this data some other way. Is there a way to deal with the JSON data directly?

| spath output=RAM ResourceInfo.RAM
| rex field=RAM max_match=0 "\"(?<component>[^\"]+)\":(?<Value>[\d\.]+)"
| eval tmp = mvzip(component,Value)
| mvexpand tmp
| eval component=mvindex(split(tmp,","),0)
| eval Value=mvindex(split(tmp,","),1)
| table component Value
With Splunk rex you need to double up on backslashes when matching backslashes in the string - try something like this:

| rex field=raw_msg max_match=0 "(?<=\(|]\\\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]"

Having said that, you might want to consider extracting each group of fields as a whole and using mvexpand before separating into post, status, etc., because the multivalue fields you currently have do not align: null values are not inserted into the mv fields.
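A hedged sketch of that group-then-expand approach (the field name group is made up, and the delimiters are taken from the rex above, untested against the real data):

| rex field=raw_msg max_match=0 "(?<=\(|]\\\\;)(?<group>[^:]+:status:[^:]*:pass_condition\[[^\]]*\]:fail_condition\[[^\]]*\]:skip_condition\[[^\]]*\])"
| mvexpand group
| rex field=group "^(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]"

This keeps each post's status and conditions together in one multivalue element, so empty values can no longer shift the columns out of alignment.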
Found a fix after going through the Microsoft docs:
https://learn.microsoft.com/en-gb/azure/storage/common/storage-private-endpoints?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json

The answer is hidden in the "DNS changes for private endpoints" section:
https://learn.microsoft.com/en-gb/azure/storage/common/storage-private-endpoints?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#dns-changes-for-private-endpoints

Once you create a private endpoint for the storage account, you should be able to resolve the private IP with a DNS name. For example:
Storage account name - StorageAccountA
Private IP - 10.1.1.2
Storage account DNS name - StorageAccountA.blob.core.windows.net

Now the trick is that wherever you are configuring the add-on, let's say from your on-prem server:
1. You should be able to resolve the DNS name to the private IP from that server.
2. You should have connectivity to the private IP on port 443.
3. You need the access keys to the storage account.

That's it - you can configure it on the add-on and the connections will go through the private endpoint.
Okay, my question was: in case we want to set up a deployer for a multisite cluster, should we follow the same procedure as we did for the single-site cluster?
We have a cluster with two search heads and two indexers. We need to install the Enterprise Security app on the search heads. The question arises regarding the summary index and indexes created during the Enterprise Security installation, like IOC and notable. Should these indexes be created with the same names on our indexers?
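For background (a general indexer-cluster fact, not specific to this thread): in a cluster, index definitions must exist on the indexers themselves, normally distributed from the cluster manager as part of the peer configuration bundle, so the ES indexes do need to be defined there under the same names. A minimal indexes.conf sketch; the index names shown are examples of those ES creates:

# indexes.conf in the cluster manager's manager-apps, pushed to all peers
[notable]
homePath   = $SPLUNK_DB/notable/db
coldPath   = $SPLUNK_DB/notable/colddb
thawedPath = $SPLUNK_DB/notable/thaweddb
repFactor  = auto

[risk]
homePath   = $SPLUNK_DB/risk/db
coldPath   = $SPLUNK_DB/risk/colddb
thawedPath = $SPLUNK_DB/risk/thaweddb
repFactor  = auto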
What do you mean by "seems like mvzip command is depricated"? Are you getting an error message? How are you trying to use it? If you don't want to or can't use the mvzip function, a replacement would depend on what it is you are trying to do. Please can you expand on your use case, with sample events, a description (in non-SPL terms) of what you are trying to achieve, and a representation of your desired output.
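For what it's worth, if mvzip genuinely cannot be used, one hedged replacement sketch (assuming two parallel multivalue fields component and Value, as in the earlier post, and Splunk 8.0+ for mvmap) pairs them by index:

| eval idx=mvrange(0, mvcount(component))
| eval tmp=mvmap(idx, mvindex(component, idx).",".mvindex(Value, idx))
| mvexpand tmp
| eval component=mvindex(split(tmp,","),0), Value=mvindex(split(tmp,","),1)
| table component Value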
Hi PickleRick, I was going from 9.0.1 to 9.3.0. Cheers, Dabbsy
Sorry! My bad!! The Manager and the SHC-Deployer are on one machine each. So you suggest the SHC-Deployer is not required for a multisite cluster?
Hi @Somesh, I don't like that the Cluster Manager and the SHC-Deployer are on the same server - I'd prefer a dedicated Cluster Manager - but what's the issue? Both the Indexers and the Search Head Cluster continue to work without the Cluster Manager and Deployer, so your infrastructure keeps working even if Site1 is unavailable. The real question should be: can my infrastructure manage the log volume and the searches? If yes, you don't have issues. Ciao. Giuseppe
Of course you're using inputs.conf. Without it you'd have no inputs. The question is which inputs you get your data from. Is it a simple tcp:// or udp:// input, with data received directly on your indexer (which you shouldn't do)? Is it an intermediate syslog daemon writing to files which are read by a UF? Is it something else?
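For reference, a sketch of what those two common patterns look like in inputs.conf - the port, path and sourcetype are examples only:

# direct network input (not recommended directly on an indexer)
[udp://514]
sourcetype = syslog

# syslog daemon writes files, a UF monitors them
[monitor:///var/log/remote-syslog]
sourcetype = syslog
# take host from the 4th path segment, assuming files land in /var/log/remote-syslog/<host>/
host_segment = 4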