All Posts



Yes I do - I get the input time range and use the earliest and latest functions 
That's an interesting case because generally the UF should have nothing to do with how ausearch operates. It just spawns a child process, runs the script, and whether ausearch does something successfully or not is really its own responsibility. What I would try, in case there is a difference - do a dump of environment variables and compare the environment your script gets when it's spawned as an input with the one it gets when you run it by hand.
Obvious things first - you don't have hardcoded earliest/latest parameters in your search, do you?
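For example (index and sourcetype names here are just illustrative), something like this would silently ignore whatever the time picker or a dashboard input supplies, because inline earliest/latest always take precedence over the time range picker:

index=my_index sourcetype=my_sourcetype earliest=-24h@h latest=now
| timechart count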
"It seems like no one responsible for this product actually looks at the questions". No, they might not. And there's a good reason for that - this is a community-driven forum. People who have spare t... See more...
"It seems like no one responsible for this product actually looks at the questions". No, they might not. And there's a good reason for that - this is a community-driven forum. People who have spare time and are willing to help others lurk here and sometimes respond to the questions they know answers to. Doesn't mean that: 1) Splunk employees, especially those you want, are active here 2) People who active questions here have knowledge about your particular problem. People who have no idea what you're asking about typically don't respond because they don't want to create pointless noise. If you want a binding response from Splunk itself, don't post questions on Answers, use an official channel - typically raise a support case or contact your sales contact (depending on the problem at hand).  
Before you jump into tstats, try a simple | from datamodel:Network_Sessions.VPN | search action=failure signature=WebVPN and check whether you get any results. If you do, it means there's something wrong with your tstats syntax. (I spot at least one typo - "vpn" as the dataset name must be uppercase.)
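If that plain datamodel search returns results, a tstats version would look roughly like this - a sketch only, since the All_Sessions.VPN node and field prefixes assume the CIM Network Sessions model; adjust to your model's actual dataset and field names:

| tstats count FROM datamodel=Network_Sessions WHERE nodename=All_Sessions.VPN All_Sessions.action="failure" All_Sessions.signature="WebVPN" BY _time span=1h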
Append uses a subsearch. Subsearches have their limits. That's one thing. Most probably, especially since you're doing a lot of funky stuff like sorting, your subsearch simply takes too much time and is silently finalized. Another thing - those searches are probably suboptimal (I don't know your data, but they don't seem right in some places). I'm always cautious when I see dedup and that much sorting. Also - you're listing a bunch of fields:

| fields statistic_id value group_name location

then using a field not on that list (and not a default field like _raw or _time):

| eval _virtual_=if(isnull(virtual), "N", "Y"), _cd_=replace(_cd, ".*:", "")

And you must not use field names beginning with an underscore for your own fields - they are reserved for Splunk's internal fields.
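A sketch of a fix for that part - keep virtual on the field list and pick names without the leading underscore (is_virtual and cd_suffix here are just example names):

| fields statistic_id value group_name location virtual
| eval is_virtual=if(isnull(virtual), "N", "Y"), cd_suffix=replace(_cd, ".*:", "")
| sort 0 -_time is_virtual -_indextime -cd_suffix
| dedup statistic_id _time group_name
| fields - is_virtual cd_suffix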
punct is an indexed field like any other (if it's generated - its creation can be disabled), so you can use it. But the question is what you mean by "conditional" extraction.
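For example, since it's indexed you can filter on it directly in the base search (the index name and pattern here are purely illustrative):

index=my_index punct="--_::_*"
| stats count by punct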
Your AIX version seems to be supported, so generally it should run. The upgrade procedure is no rocket science - https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.3/upgrade-or-uninstall-the-universal-forwarder/upgrade-the-universal-forwarder#upgrade-a-single-forwarder-0 One caveat though - you must use GNU tar for extracting the installation archive, not AIX tar.
Hi All, I need to upgrade the Splunk Universal Forwarder (UF) on AIX 7.2 from version 8.2.9 to 9.4.3. However, after attempting the upgrade, the Splunk UF crashes immediately upon startup. Could you please provide the proper upgrade steps and let me know if there are any known limitations or compatibility issues with this upgrade? Thanks in advance for your help.
Thanks, and sorry I was not clear enough. I want to do this with props and transforms so that the fields are reusable.
Hi @uagraw01  I just skimmed through it - can you try the appendcols command instead of append and let us know if that works?
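Something roughly like this (a sketch only - appendcols glues the subsearch's columns onto your main results row by row, so both sides should produce a single summary row):

<your first search> | stats count AS Scada_count
| appendcols
    [ search <your second search>
    | stats sum(value) AS dda_count ]
| table Scada_count dda_count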
Awesome, but it shows 0 results.
Hello Splunkers!! I want to combine both of the queries below using append, but it doesn't work - it always gives me only one section of the results. Please help me to fix it.

Search 1:

(index=si_error source=scada (error_status=CAME_IN OR error_status=WENT_OUT) (_time=Null OR NOT virtual))
| fields - _raw
| fields + area, zone, equipment, element, isc_id, error, error_status, start_time
| search (area="*"), (zone="*"), (equipment="*"), (isc_id="*")
| eval _time=exact(if(isnull(start_time),'_time',max(start_time,earliest_epoch))), _virtual_=if(isnull(virtual),"N","Y"), _cd_=replace('_cd',".*:","")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup isc_id error _time
| fields - _virtual_, _cd_
| fillnull value="" element
| sort 0 -_time -"_indextime"
| streamstats window=2 global=false current=true earliest(_time) AS start latest(_time) AS stop, count AS count by area zone equipment element error
| search error_status=CAME_IN
| lookup isc id AS isc_id OUTPUTNEW statistical_subject mark_code
| lookup new_ctcl_21_07.csv JoinedAttempt1 AS statistical_subject, mis_address AS error OUTPUTNEW description, operational_rate, technical_rate, alarm_severity
| fillnull value=0 technical_rate operational_rate
| fillnull value="-" alarm_severity mark_code
| eval description=coalesce(description,("Unknown text for error number " . error)), error_description=((error . "-") . description), location=((mark_code . "-") . isc_id), stop=if((count == 1),null,stop), start=exact(coalesce(start_time,'_time')), start_window=max(start,earliest_epoch), stop_window=min(stop,if((latest_epoch > now()),now(),latest_epoch)), duration=round(exact((stop_window - start_window)),3)
| fields + start, error_description, isc_id, duration, stop, mark_code, technical_rate, operational_rate, alarm_severity, area, zone, equipment
| dedup isc_id error_description start
| sort 0 start isc_id error_description asc
| eval operational_rate=(operational_rate * 100), technical_rate=(technical_rate * 100), "Start time"=strftime(start,"%d-%m-%Y %H:%M:%S"), "Stop time (within window)"=strftime(stop,"%d-%m-%Y %H:%M:%S"), "Duration (within window)"=tostring(duration,"duration")
| dedup "Start time", "Stop time (within window)", isc_id, error_description, mark_code
| search NOT error_description="*Unknown text for error*"
| search technical_rate>* AND operational_rate>* (alarm_severity="*") (mark_code="*")
| rename error_description as "Error ID", isc_id as Location, mark_code as "Mark code", technical_rate as "Technical %", operational_rate as "Operational %", alarm_severity as Severity
| lookup mordc_Av_full_assets.csv Area as area, Zone as zone, Section as equipment output TopoID
| lookup mordc_topo ID as TopoID output Description as Area
| search Area="Depalletizing, Decanting"
| stats count as Scada_count by Area
| table Scada_count

Search 2:

index=internal_statistics_1h [| inputlookup internal_statistics | where (step="Defoil and decanting" OR step="Defoil and depalletising") AND report="Throughput" AND level="step" AND measurement IN("Case") | fields id | rename id AS statistic_id]
| eval value=coalesce(value, sum_value)
| fields statistic_id value group_name location
| eval _virtual_=if(isnull(virtual), "N", "Y"), _cd_=replace(_cd, ".*:", "")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup statistic_id _time group_name
| fields - _virtual_ _cd_
| lookup internal_statistics id AS statistic_id OUTPUTNEW report level step measurement
| stats sum(value) AS dda_count
Thanks for the reply. Yes, I do get counts by time, but how can I get just the VPN data that has signature="WebVPN" and action="failure"?
Hi @stavush  Splunk has not publicly committed to an OIDC GA timeline. You could try contacting Splunk Support or your Splunk account team for roadmap details under NDA. In the meantime, there is an idea already raised for this which is getting traction, so it's worth upvoting! https://ideas.splunk.com/ideas/EID-I-300
Splunk rarely announces future features.  We won't know it's coming until it's here. Consider going to https://ideas.splunk.com to request it.
Hi @tlopes  This is a hard limit: there is no resolution or workaround (the character limit is hardcoded) other than splitting the value into multiple fields and concatenating them back together at search time. Check https://splunk.my.site.com/customer/s/article/Understanding-the-Maximum-Allowable-Length-for-Indexed-Fields-in-Metric-Index for more info.
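A minimal sketch of the search-time concatenation, with purely hypothetical index, metric, and dimension names (assuming the long value was split into dim_part1/dim_part2/dim_part3 at ingest):

| mstats latest(my.metric) AS value WHERE index=my_metrics BY dim_part1 dim_part2 dim_part3
| eval dim_full=dim_part1 . dim_part2 . dim_part3
| table dim_full value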
I deleted the duplicate post for you.
I'm trying to ingest some metrics with very long attribute values and the length of "<dim_name>::<dim_value>" seems to be limited to 2048 characters - anything beyond that gets truncated. Is there a way to increase this limit?
Hello, I would like to know if there are any plans for Splunk to support OIDC (in addition to SAML). If so, is there a roadmap or estimated timeline for this support? Thank you