All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@Jean-Sébastien  You can use the rex command. The rex command matches the value of the specified field against an unanchored regular expression and extracts the named groups into fields of the corresponding names. https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Rex 
Hello @Jean-Sébastien  You can use regex. This will create a new field called output that contains the values running, drinking, and walking. Let me know if you need more assistance!    
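A minimal sketch of the rex approach for the log fragment in the question (the field name output is illustrative; the pattern assumes the raw event contains the "state":{...} JSON shown):

```
... | rex field=_raw "\"state\":\{\"(?<output>[^\"]+)\":"
```

This captures whatever key follows "state": (running, drinking, walking, ...) into a new field called output.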
@bowesmana  Let me clarify the exact issue. We are ingesting logs from a syslog server in real time (as soon as the logs are generated at the device, the Splunk forwarder forwards them to Splunk for indexing). We also have threat intel in the form of a .csv file containing multiple headers, viz. date, ip, valid_from, valid_until, etc. We have ingested this csv file as a lookup and it is accessible through Search & Reporting. Our architecture has one master/search head and two indexers. We have configured a deployment server on the master, and the indexers (clients) are successfully in sync with it. The deployment app has been created and is being deployed to the clients. The deployment app is aimed at enriching the logs with the threat intel in the csv file. However, this enrichment has to be done before the logs are indexed, and any match of an ip in a log event with an ip in the csv should generate an additional field "Add_field" which should also get indexed along with the syslog logs. We have configured props.conf and transforms.conf in the deployment app; however, the exact configuration is not being achieved.

Regarding your specific query about real time: when we say real time, we mean that the logs are enriched at the time of indexing and the additional contextual information present in the threat intel is also indexed in additional fields. A query run on the logs therefore does not need any lookup to be incorporated in the search query, and a threat-intel match made today should stay in the logs even if the csv file is updated tomorrow.

Looking forward to a suitable solution / configuration for props.conf and transforms.conf for index-time enrichment (real-time enrichment) and not search-time enrichment. Thanks and regards
Hello, I have a big log and want to extract a specific value. A small part of the log: "state":{"running":{"startedAt":"2024-12-19T13:58:14Z"}}}], I would like to extract running in this case; the value can be something else. Could you please help me?
Thank you so much for your help. I am pleased to share that I was able to resolve the initial issue by adjusting the PEM file. However, when I execute the command:

openssl s_client -showcerts -connect hostname:port

I get a connected status, but it ultimately results in the following error:

80FB2563307F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:../ssl/record/rec_layer_s3.c:317:

Additionally, another error is displayed:

Verification error: self-signed certificate in certificate chain

Your help would be greatly appreciated.
I cloned the HTTP traffic collection from Splunk Stream and named the clone HTTP_test, but no data is collected. However, data is currently being collected by the Stream rules that collect HTTP data. Is there a reason why the same item does not collect data even though it was cloned?
@inventsekar  Sorry for the late reply. My Splunk Enterprise version is 9.1.0.2.
Hi everyone, I’m new to working with Citrix NetScaler and need assistance with integrating it into Splunk Enterprise. Could someone please guide me on: the prerequisites required for this integration, and the exact steps to follow for a successful setup and comprehensive data coverage. Any detailed insights or documentation links would be greatly appreciated. Also, please let me know when it is necessary to use Splunk dashboards or visualization apps for NetScaler data. Thank you! Splunk Add-on for Citrix NetScaler
Hi, I have three license keys for Splunk SOAR and Splunk UBA, each valid for one year. While I am able to install the keys on both SOAR and UBA, I would like to verify all the keys I have installed, identify which key is currently active, and check their expiration dates. Thank you
@sarathi125 FYI: Although you have a solution, using join is not the Splunk way of doing things. Joining data sets should really be done using stats: it's faster, more efficient and does not have the limitations of join, which will silently discard results if the join subsearch exceeds 50,000 results. This may not be an issue in your case, but it's good practice to get your head around using stats to achieve joins. I also recommend you sort out the automatic field extraction so that you don't have to manually extract jobId - which then means you can use the fields in subsearches, and only then have to make a single search.
Hi @bowesmana, with the query below I was able to achieve what I was trying to get. Thank you for your input.

index="<index>" (source="user1" OR source="user2") "The transaction reference id is"
| rex field=_raw "\"jobId\":\s?\"(?<jobId>[a-fA-F0-9\-]+)\""
| join jobId
    [ search index="<index>" (source="user1" OR source="user2") ("<ProcessName>" AND "Exception occurred")
    | rex field=_raw "\"jobId\":\s?\"(?<jobId>[a-fA-F0-9\-]+)\""
    | table jobId, _time, _raw ]
| table _time, jobId, _raw
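As a follow-up, the join above can usually be replaced with the stats pattern recommended elsewhere in this thread. A sketch, reusing the filters and rex from the query above (the is_exception flag is an illustrative helper, not from the original query):

```
index="<index>" (source="user1" OR source="user2")
    ("The transaction reference id is" OR ("<ProcessName>" AND "Exception occurred"))
| rex field=_raw "\"jobId\":\s?\"(?<jobId>[a-fA-F0-9\-]+)\""
| eval is_exception=if(searchmatch("Exception occurred"), 1, 0)
| stats values(_time) as _time, values(_raw) as _raw, max(is_exception) as has_exception by jobId
| where has_exception=1
```

This groups both sets of events on jobId in a single pass and keeps only the jobIds that also produced an exception, avoiding join's subsearch limits.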
Thanks! Albeit a bit slow and unresponsive, I got some results. The output table had the columns NB, action, event_id, mx_status, operation, portfolio_entity, portfolio_name, sky_id, trade_type and tradebooking_sgp, with values such as mx_status=LIVE, sky_id=12345678 and trade_type=VanillaSwap (the remaining cells did not survive the paste).
Aside from the limits for base search results, using a base search to hold large numbers of results will often NOT improve performance, because you are taking lots of results from perhaps multiple indexers, where you benefit from parallelism, and sticking them on the search head, where you only have the CPU of the single search head to process all those results - also competing for CPU with other users of that search head.

Note that the comments about doing this in the base search

... | stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time EId

followed by a post-process search doing

| search EId="5eb2aee9" | stats count as Total, count(failures) as failures, first(p95RespTime) as p95RespTime by _time

... is not quite right, as you don't need another stats; you are just taking the information calculated in the base stats and filtering out only the EId you want.

However, a point to note about stats + stats is that the second stats would not do stats count, but stats sum(Total), i.e. if you wanted to get the total for EId without regard to _time, you could do something like this:

| search EId="5eb2aee9"
| stats sum(Total) as Total, sum(failures) as failures, min(p95RespTime) as min_p95RespTime max(p95RespTime) as max_p95RespTime avg(p95RespTime) as avg_p95RespTime
Can you clarify what you did to get the "search time enrichment"? Did you create an automatic lookup, are you using a lookup to enrich the data in your search SPL, or are you doing something else? If you change your lookup, then the lookup results will change, so I am not sure what you mean by "real time enrichment". The principle of a CSV lookup is to give you data from the lookup file based on a field or fields in an event. That principle would give you "search time" AND "real time" enrichment, as they would be one and the same thing.
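For reference, a search-time automatic lookup is configured roughly like this (a hedged sketch; the sourcetype, lookup name, and field names are placeholders, not taken from the original post):

```
# transforms.conf
[threat_intel_lookup]
filename = threat_intel.csv

# props.conf
[your:sourcetype]
LOOKUP-threat = threat_intel_lookup ip AS src_ip OUTPUT Add_field
```

With this in place, every event of that sourcetype whose src_ip matches an ip row in the CSV gets Add_field populated at search time, with no | lookup needed in the SPL, and an updated CSV takes effect on the next search.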
In your screenshot, the field jobId had a lower case j, whereas you're using JobId - field names are case sensitive. Also, when you use simple spath to extract all fields, they will have the JSON hierarchy in their field names, i.e. the jobId is the field Properties.jobId, not jobId. Also, this is all achievable without using append, so try the subsearch to do the constraints for the outer search.
If you run the search that gives you that output in Verbose mode, you will see the fields that are automatically extracted. If jobId is a field that is automatically extracted, then you should write a basic search that looks for all the jobIds you want - you tried to do that with your rex statement, but you actually included the text "jobId:..." in the dynamic_text; you actually want the jobId data without "jobId:".

As @isoutamo says, if jobId is NOT auto-extracted, then use spath to get it and then do the stats on the jobId. E.g. this is the SUBSEARCH - which, if you run it on its own, will return a single field called jobId with all the jobIds you want:

[ search index="<indexname>" (source="user1" OR source="user2") "<ProcessName>" "Exception occurred"
| spath Properties.jobId ``` This uses spath to extract the jobId ```
| where isnotnull('Properties.jobId')
| stats values('Properties.jobId') AS jobId ]

Then use this as the subsearch to the outer search, and it will find all records that have a jobId matching the ones you are selecting.

Note that if your jobId is NOT auto-extracted, then you cannot make a search for jobId=X, so you will need to either configure Splunk to auto-extract the JSON or create a calculated field with this type of expression:

| eval jobId=spath(_raw, "Properties.jobId")

which will mean jobId is always a field in your data for search, so you won't have to use the spath expression in your search.
What do you get when you try something like this?

index=sky sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky
``` Parse sky_trade_murex_timestamp events (note that trade_id is put directly into the NB field) ```
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
``` Parse mx_to_sky events ```
| rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
``` Reduce to just the fields of interest ```
| fields sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
``` "Join" events by NB using stats ```
| stats values(*) as * by NB
Hi, I have deployed Splunk Enterprise and my logs are getting ingested into the indexer. Now I have created an app for enriching the logs with additional fields from a csv file. I have deployed the app by making configuration changes in props.conf and transforms.conf, and I am able to see search-time enrichment. But my requirement is real-time enrichment, as my csv file changes every 2 days. Can anyone provide a sample configuration for props.conf and transforms.conf for real-time enrichment of logs with fields from a csv, based on a match with one of the fields of the logs? Regards
Hello everyone, I’m trying to send SPAN traffic from a single interface (ens35) to Splunk Enterprise using the Splunk Stream forwarder in independent mode. The Splunk Stream forwarder and the search head appear to be connected properly, but I’m not seeing any of the SPAN traffic in Splunk. In streamfwd.log, I see the following error:

(CaptureServer.cpp:2032) stream.CaptureServer - NetFlow receiver configuration is not set in streamfwd.conf. NetFlow data will not be captured. Please update streamfwd.conf to include correct NetFlow receiver configuration.

However, I’m not trying to capture NetFlow data; I only want to capture the raw SPAN traffic. Here is my streamfwd.conf:

[streamfwd]
httpEventCollectorToken = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
indexer.1.uri = http://splunk-indexer:8088
indexer.2.uri = http://splunk-indexer2:8088
streamfwdcapture.1.interface = ens35

Why is the SPAN traffic not being forwarded to Splunk? How can I configure Splunk Stream properly so that it captures and sends the SPAN traffic to my indexers without any NetFlow setup? Thank you!