All Posts

Hi All, I need to upgrade the Splunk Universal Forwarder (UF) on AIX 7.2 from version 8.2.9 to 9.4.3. However, after attempting the upgrade, the Splunk UF crashes immediately upon startup. Could you please provide the proper upgrade steps and let me know if there are any known limitations or compatibility issues with this upgrade? Thanks in advance for your help.
Thanks, and sorry I was not clear enough. I want to do this with props and transforms so that the fields are reusable.
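For reference, a reusable search-time extraction along those lines might look roughly like the sketch below. The stanza names, sourcetype, field names, and regex are hypothetical placeholders to adapt, not the actual fields in question:

    # transforms.conf -- a reusable search-time field extraction
    # (stanza name, field names, and regex are hypothetical placeholders)
    [extract_scada_fields]
    REGEX = area=(?<area>\S+)\s+zone=(?<zone>\S+)

    # props.conf -- attach the transform to the relevant sourcetype (placeholder name)
    [scada:events]
    REPORT-scada_fields = extract_scada_fields

Because the extraction lives in its own transforms.conf stanza, the same REGEX can be reused from any number of sourcetypes by adding further REPORT- lines.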
Hi @uagraw01  I just skimmed through it. Can you try the appendcols command instead of append? Let us know if that works.
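For illustration, a minimal self-contained sketch of how appendcols stitches two single-row results side by side (the makeresults values are placeholders, not real data):

    | makeresults
    ``` placeholder for the first search, reduced to one row with Scada_count ```
    | eval Scada_count=42
    | table Scada_count
    | appendcols
        [| makeresults
         ``` placeholder for the second search, reduced to one row with dda_count ```
         | eval dda_count=17
         | table dda_count]
    | table Scada_count dda_count

Unlike append, which stacks the second result set underneath the first, appendcols pastes its columns alongside the existing rows, which is usually what you want when each search reduces to a single stats row.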
Awesome, but it shows 0 results.
Hello Splunkers!! I want to combine both of the queries below using append, but it does not work; it always gives me only one section of the results. Please help me fix it.

Search 1:
(index=si_error source=scada (error_status=CAME_IN OR error_status=WENT_OUT) (_time=Null OR NOT virtual))
| fields - _raw
| fields + area, zone, equipment, element, isc_id, error, error_status, start_time
| search (area="*"), (zone="*"), (equipment="*"), (isc_id="*")
| eval _time=exact(if(isnull(start_time),'_time',max(start_time,earliest_epoch))), _virtual_=if(isnull(virtual),"N","Y"), _cd_=replace('_cd',".*:","")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup isc_id error _time
| fields - _virtual_, _cd_
| fillnull value="" element
| sort 0 -_time -"_indextime"
| streamstats window=2 global=false current=true earliest(_time) AS start latest(_time) AS stop, count AS count by area zone equipment element error
| search error_status=CAME_IN
| lookup isc id AS isc_id OUTPUTNEW statistical_subject mark_code
| lookup new_ctcl_21_07.csv JoinedAttempt1 AS statistical_subject, mis_address AS error OUTPUTNEW description, operational_rate, technical_rate, alarm_severity
| fillnull value=0 technical_rate operational_rate
| fillnull value="-" alarm_severity mark_code
| eval description=coalesce(description,("Unknown text for error number " . error)), error_description=((error . "-") . description), location=((mark_code . "-") . isc_id), stop=if((count == 1),null,stop), start=exact(coalesce(start_time,'_time')), start_window=max(start,earliest_epoch), stop_window=min(stop,if((latest_epoch > now()),now(),latest_epoch)), duration=round(exact((stop_window - start_window)),3)
| fields + start, error_description, isc_id, duration, stop, mark_code, technical_rate, operational_rate, alarm_severity, area, zone, equipment
| dedup isc_id error_description start
| sort 0 start isc_id error_description asc
| eval operational_rate=(operational_rate * 100), technical_rate=(technical_rate * 100), "Start time"=strftime(start,"%d-%m-%Y %H:%M:%S"), "Stop time (within window)"=strftime(stop,"%d-%m-%Y %H:%M:%S"), "Duration (within window)"=tostring(duration,"duration")
| dedup "Start time","Stop time (within window)", isc_id, error_description, mark_code
| search NOT error_description="*Unknown text for error*"
| search technical_rate>* AND operational_rate>* (alarm_severity="*") (mark_code="*")
| rename error_description as "Error ID", isc_id as Location, mark_code as "Mark code", technical_rate as "Technical %", operational_rate as "Operational %", alarm_severity as Severity
| lookup mordc_Av_full_assets.csv Area as area, Zone as zone, Section as equipment output TopoID
| lookup mordc_topo ID as TopoID output Description as Area
| search Area="Depalletizing, Decanting"
| stats count as Scada_count by Area
| table Scada_count

Search 2:
index=internal_statistics_1h
    [| inputlookup internal_statistics
     | where (step="Defoil and decanting" OR step="Defoil and depalletising") AND report="Throughput" AND level="step" AND measurement IN("Case")
     | fields id
     | rename id AS statistic_id]
| eval value=coalesce(value, sum_value)
| fields statistic_id value group_name location
| eval _virtual_=if(isnull(virtual), "N", "Y"), _cd_=replace(_cd, ".*:", "")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup statistic_id _time group_name
| fields - _virtual_ _cd_
| lookup internal_statistics id AS statistic_id OUTPUTNEW report level step measurement
| stats sum(value) AS dda_count
Thanks for the reply. Yes, I do get counts by time, but how can I get just the VPN data that has signature="WebVPN" and action="failure"?
Hi @stavush  Splunk has not publicly committed to an OIDC GA timeline. You could try contacting Splunk Support or your Splunk account team for roadmap details under NDA. In the meantime, there is an idea already raised for this which is getting traction, so it's worth upvoting: https://ideas.splunk.com/ideas/EID-I-300  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Splunk rarely announces future features.  We won't know it's coming until it's here. Consider going to https://ideas.splunk.com to request it.
Hi @tlopes  This is a hard limit: there is no resolution or workaround to get past this limitation (the character limit is hardcoded) other than splitting the value into multiple fields and then concatenating the fields back together during the search. Check https://splunk.my.site.com/customer/s/article/Understanding-the-Maximum-Allowable-Length-for-Indexed-Fields-in-Metric-Index for more info.  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
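As a rough sketch of that workaround at search time: the index name, metric name, and dim_part1/dim_part2 dimensions below are hypothetical, and assume the long value was already split into two dimensions at ingest:

    | mstats latest(my.metric) AS value WHERE index=my_metrics BY dim_part1 dim_part2
    ``` rebuild the original long attribute value from its two parts ```
    | eval full_dim = dim_part1 . dim_part2
    | stats sum(value) AS total BY full_dim

Each individual dimension then stays under the length cap, while the concatenated field restores the full value for grouping and display.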
I deleted the duplicate post for you.
I'm trying to ingest some metrics with very long attribute values and the length of "<dim_name>::<dim_value>" seems to be limited to 2048 characters - anything beyond that gets truncated. Is there a way to increase this limit?
Hello, I would like to know if there are any plans for Splunk to support OIDC (in addition to SAML). If so, is there a roadmap or estimated timeline for this support? Thank you
@livehybrid Thanks for pointing it out. Yes, that applies to Classic only.  @Sudhagar, as @livehybrid mentioned, in Dashboard Studio you can use row.event_id.value (for a specific field value), value (for any clicked value), or name (the field name of the clicked value):

"options": {
  "tokens": [
    {
      "key": "value",
      "token": "eventid"
    }
  ]
}
Hi @hl  If you're getting 0 results from that query, but you are getting results in the pivot, then it sounds like one of the fields you are using for the filter is not quite right. I think the correct query should be:

| tstats count from datamodel=Network_Sessions.All_Sessions where nodename=All_Sessions.VPN All_Sessions.action=failure All_Sessions.signature="WebVPN" by _time span=1h

The action and signature fields belong to "All_Sessions".  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @fongpen  This is interesting, as the 201 response and the fact that the ticket is actually created suggest that what is being sent to the API is correct; it's just that the API returns something invalid which the TA cannot process, either a non-JSON string or an invalid JSON string. I think at this point it will need to go to the app developers to investigate, unless you want to start tweaking the script yourself to output the exact response from the API. Since the app is supported by Splunk, I think the best action would be to log a support case and explain the steps you've been through already, providing the logs that you've shared here; hopefully they can pass it to their dev team to investigate and remediate. You can log a support case via https://www.splunk.com/support  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Here is a full working example for you to try:

{
  "title": "TokenSet",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    },
    "visualizations": {
      "global": {
        "showProgressBar": true
      }
    }
  },
  "visualizations": {
    "viz_6cm12FqM": {
      "options": {
        "markdown": "eventid: $eventid$"
      },
      "type": "splunk.markdown"
    },
    "viz_Fba9zdzF": {
      "dataSources": {
        "primary": "ds_60Uo5lG8"
      },
      "eventHandlers": [
        {
          "options": {
            "tokens": [
              {
                "key": "row.event_id.value",
                "token": "eventid"
              }
            ]
          },
          "type": "drilldown.setToken"
        }
      ],
      "options": {},
      "type": "splunk.table"
    }
  },
  "dataSources": {
    "ds_60Uo5lG8": {
      "name": "Search_1",
      "options": {
        "query": "| makeresults count=5 \n| streamstats count\n| eval msg=\"Test message\".tostring(count)\n| eval event_id=md5(msg)"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [
      "input_global_trp"
    ],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_6cm12FqM",
            "position": { "h": 40, "w": 360, "x": 20, "y": 20 },
            "type": "block"
          },
          {
            "item": "viz_Fba9zdzF",
            "position": { "h": 120, "w": 1360, "x": 20, "y": 60 },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @Sudhagar  The issue here is your "token" field value - it should be either key, value, or row.<fieldName>.value, such as:

"eventHandlers": [
  {
    "options": {
      "tokens": [
        {
          "key": "row.event_id.value",
          "token": "eventid"
        }
      ]
    },
    "type": "drilldown.setToken"
  }
],

NOT just the field name as you have in your example. @PrewinThomas Regarding $click.value$ - doesn't this only apply to Classic XML dashboards?  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @thahir  This information is incorrect; also, this isn't two minor/patch versions apart, it's a major version (8 -> 9). Interestingly, if you ask several AI models the same question, they also say it's supported (and sometimes link to the upgrade page that says it isn't!). I'm not saying your response came from an AI as such, but it's easy for misinformation to spread as truth, which is why I'm pointing this out. For clarity, the supported upgrade path should be 8.1.14 -> 9.0.9 -> 9.2.8 -> 9.4.x. See https://docs.splunk.com/Documentation/Splunk/9.4.2/Installation/HowtoupgradeSplunk and https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/9.2/upgrade-or-migrate-splunk-enterprise/how-to-upgrade-splunk-enterprise  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @igor5212  TL;DR: your upgrade path should be 8.1.14 -> 9.0.9 -> 9.2.8 -> 9.4.x. The advice from @thahir here is not correct, so please be careful ("upgrading from Splunk 8.1.14 to 9.4 is supported, as Splunk supports direct upgrades between any two minor/patch versions") - this is not the case. Please see https://docs.splunk.com/Documentation/Splunk/9.4.2/Installation/HowtoupgradeSplunk, which states that to upgrade to 9.4 you need to be on 9.1/9.2. To upgrade to 9.2 from 8.1.x you first need to upgrade to 9.0 (see https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/9.2/upgrade-or-migrate-splunk-enterprise/how-to-upgrade-splunk-enterprise). Therefore your upgrade path should be 8.1.14 -> 9.0.9 -> 9.2.8 -> 9.4.x. You can get older binaries/packages as required by using https://github.com/livehybrid/downloadSplunk, or I can add them here if you let me know which packages you need. As @PrewinThomas mentioned, another option would be to copy the configuration from the 8.1 server to a new installation running 9.4; *however*, please note that this really depends on your configuration and is generally only advised for forwarders. Even then, any checkpoint data for inputs or KV stores may mean you face issues with re-ingesting data or failed data collections. If you are upgrading SH/IDX then I would strongly suggest following the supported upgrade path, as there are changes to things like the indexes which cannot be made manually.  Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
@Sudhagar  What's your actual field name - is it eventid or event_id? Also, can you test with the below (to get your actual clicked value)?

{
  "type": "drilldown.setToken",
  "options": {
    "tokens": [
      {
        "token": "event_id",
        "value": "$click.value$"
      }
    ]
  }
}

Then use markdown to test:

{
  "type": "splunk.markdown",
  "options": {
    "markdown": "**Selected Event ID:** $event_id$",
    "fontColor": "#ffffff",
    "fontSize": "custom",
    "customFontSize": 25
  }
}

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!