All Posts

50k is the limit on a subsearch when used with the join command. The "normal" subsearch limit is much lower - it's 10k results.
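For reference, these defaults live in limits.conf - a minimal sketch (verify the stanza and setting names against the limits.conf spec for your version before changing anything, and remember that raising them increases search-head memory use):

# limits.conf (defaults shown)
[subsearch]
# cap on results returned by a "normal" subsearch
maxout = 10000

[join]
# cap on results returned by a subsearch feeding the join command
subsearch_maxout = 50000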
OK. So this is the second case I mentioned. How do you decide then if it's a single session or two separate sessions? Are the events occurring repeatedly while the user is logged in?
Hello, would anyone know whether it is possible to migrate an on-prem SmartStore to Splunk Cloud? How would that happen? Thank you!
It's hard to see, but what is needed is for the "Message": line to be the breaking line and for the "TimeStamp": line to be the first line of the whole event.

"Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 14-Jun-24 07:54:50, End: 14-Jun-24 07:56:20, Mode: 5, Status: [-11059] No Good Data For Calculation", -------event break here
"TimeStamp": "\/Date(1718366180157)\/",  ----event start here

In the example I sent it's hard to see the break after Message and before TimeStamp clearly, because they look like one big line.
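A minimal props.conf sketch for that break (the sourcetype name is a placeholder, and the regex assumes the comma after the Message value is followed by a line break - adjust it to the exact raw spacing):

[pi_json:messages]
SHOULD_LINEMERGE = false
# break between the "Message": line and the "TimeStamp": line;
# the first capture group is discarded, so the comma ends the previous
# event and "TimeStamp" starts the next one
LINE_BREAKER = ,(\s*[\r\n]+\s*)"TimeStamp"
# parse the epoch-milliseconds value inside \/Date(...)\/
TIME_PREFIX = "TimeStamp":\s*"\\/Date\(
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20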
We are looking to integrate Splunk SIEM with our microservice: we want to send events from the service to Splunk and then configure alerts based on eventType. As we understand it, there are two approaches: the Universal Forwarder and the HTTP Event Collector (HEC). We are leaning towards HEC because it can also acknowledge events; the challenge with the Universal Forwarder is that it has to be managed by the customer on the host where it runs, and the volume of events is not that high anyway. Can someone help us understand the cost involved in both approaches, and how HEC scales if the number of events increases due to a spike? Also, should we build a Technology Add-on or an app that can be used along with Splunk Enterprise Security? We want to implement this for Splunk Enterprise as well as Splunk Cloud. #SplunkAddOnbuilder
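For context, a minimal HEC call looks like the sketch below (the host, token, index, sourcetype, and eventType value are all placeholders; indexer acknowledgment additionally requires useACK on the token, an X-Splunk-Request-Channel header, and polling the /services/collector/ack endpoint):

curl https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": {"eventType": "ORDER_FAILED"}, "sourcetype": "microservice:json", "index": "app_events"}'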
  | rex "(?<head1>[^,]*),(?<head2>[^,]*),(?<head3>[^,]*),(?<head4>[^,]*),(?<head5>[^,]*),(?<head6>[^,]*),(?<head7>[^,]*),(?<head8>[^,]*),(?<head9>[^,]*),(?<head10>[^,]*),(?<head11>[^,]*),(?<head12>[... See more...
  | rex "(?<head1>[^,]*),(?<head2>[^,]*),(?<head3>[^,]*),(?<head4>[^,]*),(?<head5>[^,]*),(?<head6>[^,]*),(?<head7>[^,]*),(?<head8>[^,]*),(?<head9>[^,]*),(?<head10>[^,]*),(?<head11>[^,]*),(?<head12>[^,]*)" The fields will be null so you could use fillnull to give them values e.g. | fillnull value="N/A"  
Hi there, for better visibility I built a dashboard for indexer restarts, based on the _internal index and /var/log/messages from the indexers themselves. I would like to add info on how the restart was triggered, so I can see whether the restart came from the manager (WebUI: Configuration Bundle Actions) or was done via the CLI. Does Splunk log this? If yes, where do I find that info? Thanks in advance!
If it shows no results, how can I make it so that the value of that 'epoch' field shows 'OK' versus 'Not OK'?
I have a few events where data is not available; instead I see commas where the head6 and head7 data would be. I need a rex so that the output is blank if there is no data, but if data is available it should be extracted. Below is the event (three commas between UNKNOWN and /test):

head1,head2,head3,head4,head5,head6,head7,head8,head9,head10,head11,head12
sadfasdfafasdfs,2024-06 21T01:33:30.918000+00:00,test12,1,UNKNOWN,,,/test/rrr/swss/customer1/454554/test.xml,UNKNOWN,PASS,2024-06-21T01:33:30.213000+00:00,UNKNOWN
Doing an EVAL in STATS has made my day @ITWhisperer
Subsearches are usually limited to 50k events, so an all-time subsearch is likely to have been (silently) terminated. Given that your index and source type are the same, try removing the subsearch:

index=ndx sourcetype=src (device="PM4" OR device="PM2") earliest=0 latest=@d
| bucket _time span=1d
| stats max(eval(if(device="PM4",value,null()))) as PM4Val max(eval(if(device="PM2",value,null()))) as PM2Val by _time index
Typically, most extraction takes place at search time, so the most important thing about field extraction is that the format is consistent and can be easily configured (so you don't have cases like escaped characters). From an indexing performance point of view, it's most important that the format is consistent across the whole sourcetype, that the data breaks easily into separate events, and that the timestamp is well-defined and ideally placed at the beginning of the event. If you have all this and your sourcetype has the so-called great eight properly configured, you're good to go. From a practical point of view regarding parsing the data: avoid any nesting - like "real" data as a somehow-formatted string within a JSON structure, or the other way around, a JSON structure with a syslog header, any escaped strings within strings, and so on - it makes writing extractions and searches a painful experience.
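For reference, a minimal sketch of the "great eight" props.conf settings (the sourcetype name and all values are placeholders - tune them to your data):

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
# so universal forwarders can break the stream into events too
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)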
Given your sample data, the extraction does work, as shown by this runanywhere example:

| makeresults
| eval _raw="{ \"eventTime\": \"2024-06-24T06:15:42Z\", \"leaduuid\": \"1234455\", \"CrmId\": \"11111111\", \"studentCrmUuid\": \"634543564\", \"externalId\": \"\", \"SiteId\": \"xxxx\", \"subCategory\": \"\", \"category\": \"Course Enquiry\", \"eventId\": \"\", \"eventRegistrationId\": \"\", \"status\": \"Open\", \"source\": \"Online Enquiry\", \"leadId\": \"22222222\", \"assignmentStatusCode\": \"\", \"assignmentStatus\": \"\", \"isFirstLead\": \"yes\", \"c4cEventId\": \"\", \"channelPartnerApplication\": \"no\", \"applicationReceivedDate\": \"\", \"referredBy\": \"\", \"referrerCounsellor\": \"\", \"createdBy\": \"Technical User\", \"lastChangedBy\": \"Technical User\" , \"leadSubAgentID\": \"\", \"cancelReason\": \"\"}, \"offersInPrinciple\": {\"offersinPrinciple\": \"no\", \"oipReferenceNumber\": \"\", \"oipVerificationStatus\": \"\"}, \"qualification\": {\"qualification\": \"Unqualified\", \"primaryFinancialSource\": \"\"}, \"online\": {\"referringUrl\": \"\", \"idpNearestOffice\": \"\", \"sourceSiteId\": \"xxxxx\", \"preferredCounsellingMode\": \"\", \"institutionInfo\": \"\", \"courseName\": \"\", \"howDidYouHear\": \"Social Media\"}"
| rex "\"CrmId\": \"(?<CrmId>[^\"]+).*\"status\": \"(?<status>[^\"]+).*\"source\": \"(?<source>[^\"]+).*\"leadId\": \"(?<leadId>[^\"]+).*\"isFirstLead\": \"(?<isFirstLead>[^\"]+).*\"offersinPrinciple\": \"(?<offersinPrinciple>[^\"]+).*\"sourceSiteId\": \"(?<sourceSiteId>[^\"]+).*\"howDidYouHear\": \"(?<howDidYouHear>[^\"]+)"

Please provide more details on what exactly is "not working", and more examples of your events demonstrating the failure.
Hi @hamed.khosrawi, It's been a few days and the Community has not jumped in. Have you happened to find a solution or anything new you can share? If you are still looking for help, reach out to Sales: https://www.appdynamics.com/company/contact-us
1. Where are you putting those configs?
2. Do you use indexed extractions?
Here's a sample in classic Simple XML; I still can't figure out how to count the number of selected items.

<form version="1.1" theme="light"> <label>test2</label> <fieldset submitButton="false"> <input type="multiselect" token="element" searchWhenChanged="true"> <label>Fruit Select</label> <choice value="a">Apple</choice> <choice value="b">Banana</choice> <choice value="c">Coconut</choice> <choice value="d">Dragonfruit</choice> <choice value="e">Elderberry</choice> <choice value="f">Fig</choice> <choice value="g">Grape</choice> </input> </fieldset> <row> <panel> <single> <title>Number of selected fruit</title> <search> <query>| makeresults | stats count($element$) as selected_total | table selected_total</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option> <option name="refresh.display">progressbar</option> </single> </panel> </row> </form>
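One approach that may work (a sketch, not tested against this dashboard, and it assumes no choice value itself contains a comma): give the multiselect an explicit delimiter, then count the token's values with split/mvcount, since $element$ expands to the selected values joined by that delimiter.

<input type="multiselect" token="element" searchWhenChanged="true">
  <label>Fruit Select</label>
  <choice value="a">Apple</choice>
  <delimiter>,</delimiter>
</input>
...
<query>| makeresults | eval selected_total = mvcount(split("$element$", ",")) | table selected_total</query>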
The field is available at the level at which it was defined. So if you have - for example - the Network_Traffic datamodel, all fields are defined at the root level - the All_Traffic node. So the proper search would be

| tstats count from datamodel=Network_Traffic.All_Traffic where nodename=All_Traffic.Traffic_By_Action.Allowed_Traffic by All_Traffic.src_ip

But as the Performance datamodel has some fields defined at "lower" levels, you can do - for example -

| tstats count from datamodel=Performance.All_Performance where nodename=All_Performance.OS.Timesync by All_Performance.OS.Timesync.action
Hi all, I have a search that works for a range of a few days (e.g. earliest=-7d@d), but when running for all time it breaks. I suspect this is an issue with appendcols or streamstats? Any pointers would be appreciated. I'm using this to generate a lookup which I can then search instead of running an expensive all-time search.

index=ndx sourcetype=src (device="PM4") earliest=0 latest=@d
| bucket _time span=1d
| stats max(value) as PM4Val by _time index
| appendcols
    [ search index=ndx sourcetype=src (device="PM2") earliest=0 latest=@d
    | bucket _time span=1d
    | stats max(value) as PM2Val by _time index ]
| streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index
| eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val
| table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta
| sort index -_time
@PickleRick & @ITWhisperer thank you both for the replies, I understand there are other topics in the forum but all rely as you mentioned on a login/logout field, which is not present in my raw data... See more...
@PickleRick & @ITWhisperer thank you both for the replies, I understand there are other topics in the forum but all rely as you mentioned on a login/logout field, which is not present in my raw data, this is why I am calculating based on if there are events.   Sample of raw data:   Jun 24 15:01:20 10.50.8.100 1 2024-06-24T15:01:20+03:00 pafw01.company.com.sa - - - - 1,2024/06/24 15:01:19,007959000163983,TRAFFIC,end,2561,2024/06/24 15:01:19,192.168.44.43,10.130.11.2,0.0.0.0,0.0.0.0,GP-Access-Organization-Services-Applications,company\user1,,ssl,vsys1,GP-VPN,Trust,tunnel.21,ethernet1/4,splunk-forwarding,2024/06/24 15:01:19,1269402,1,61723,443,0,0,0x47a,tcp,allow,33254,13498,19756,210,2024/06/24 14:36:36,1454,White-List,,7352086992805546250,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,,105,105,tcp-rst-from-client,0,0,0,0,,pafw01,from-policy,,,0,,0,,N/A,0,0,0,0,09a8fe83-e848-4cbb-bdff-0d35a4ce96b2,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2024-06-24T15:01:20.681+03:00,,,encrypted-tunnel,networking,browser-based,4,"used-by-malware,able-to-transfer-file,has-known-vulnerability,tunnel-other-application,pervasive-use",,ssl,no,no,0    
A forwarder is an active component in the event's path, so every connection from/to the forwarder has its own settings and should not affect other connections from/to it. You can have a forwarder receiving encrypted and compressed data and sending it unencrypted and uncompressed, and vice versa (although sending data not TLS-protected is of course not recommended). Anyway, if you're using TLS, useClientSSLCompression is enabled by default (but you can still explicitly enable it). If you're not using TLS, with modern forwarders, if one of the connection ends has compression enabled, the endpoints should negotiate compression on the link (of course we're talking about S2S, not some syslog forwarding).
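A minimal outputs.conf sketch of the settings mentioned (the group name and server are placeholders; check outputs.conf.spec for your version):

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
# with TLS, compression rides on the SSL layer and is on by default
useClientSSLCompression = true
# without TLS, request s2s-level compression instead
# compressed = true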