
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Currently using Dashboard Classic, I added Markdown text at the bottom of my pie chart to tell the user when the data was last updated. Is there a way in the Markdown text to format job.lastUpdated? It currently shows in Zulu time. I was also thinking of putting it in the description field, if possible.
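A minimal Simple XML sketch of one workaround, assuming the panel's search can set a token when it completes (the search, token name, and format string are placeholders, and strftime(now(), ...) approximates the job completion time rather than reformatting job.lastUpdated itself):

<search id="pie_search">
  <query>index=my_index | stats count by category</query>
  <done>
    <!-- fires when the search job finishes; the token is then usable in the
         description field or Markdown text as $last_updated$ -->
    <eval token="last_updated">strftime(now(), "%d %b %Y %H:%M:%S %Z")</eval>
  </done>
</search>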
We currently have a search that shows a timeline graph of daily SVC usage by index. Ten of these indexes are our highest SVC consumers. I would like to create an alert if the SVC usage for any of those indexes goes 25% higher or lower than its normal amount. Example: index=test normally uses 100 to 140 SVC per day; the alert should tell us when that index goes 25% over 140 or 25% under 100. We want the search to do this for at least our top 10 SVC-usage indexes. Our current timechart search is as follows:

index=svc_summary
| timechart limit=10 span=1d useother=f sum(svc_usage) by Indexes
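A rough SPL sketch of one way to alert on deviation from a per-index baseline, reusing the field names from the timechart above; the 30-day lookback and the average-based baseline are assumptions, not the poster's confirmed ranges:

index=svc_summary earliest=-30d@d latest=@d
| bin _time span=1d
| stats sum(svc_usage) AS daily_svc BY _time Indexes
| eventstats avg(daily_svc) AS baseline BY Indexes
| where _time >= relative_time(now(), "-1d@d")
| where daily_svc > baseline * 1.25 OR daily_svc < baseline * 0.75

Scheduled daily, this fires only when yesterday's total for an index falls outside ±25% of that index's own average.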
We have a search app that a group of users works from. All of the users have the power role, and we have given the power role write permissions on the app. When they try to share some saved searches, they get this error:

User 'XXXX' with roles { XXX-power, XXX-user } cannot write: /nobody/XXX_Splunk_app/savedsearches/Test { read : [ XXX-power, XXX-user ], write : [ XXX-admin ] }, export: app, owner: nobody, removable: no, modtime: 1754574650.678659000

From what I've read online, once a user is given write permissions on the app, they can share their knowledge objects. Am I doing something wrong here, or has this changed?
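One hedged guess from the ACL in the error (write restricted to XXX-admin): the app's metadata may be limiting who can write shared saved searches, independently of the app-level permission. A sketch of what metadata/local.meta in the app might look like if the power role should be able to write them (role names copied from the error; verify against your actual defaults before changing anything):

[savedsearches]
access = read : [ XXX-power, XXX-user ], write : [ XXX-power, XXX-admin ]
export = app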
I have this regex:

^(?:[^ \\n]* ){7}(?P<src_host>[^ ]+)[^:\\n]*:\\s+(?P<event_id>[a-f0-9]+:\\d+)(?:[^/\\n]*/){2}(?P<dest_zone>[^\\.]+)

I put it in field extractions with the right sourcetype as an inline field extraction, and it still won't show the extracted fields when I search. _internal shows its status as "applied". Any idea why?
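One way to isolate whether the pattern itself matches is to test it inline with rex; note the doubled backslashes above look like conf-file escaping, so the sketch below assumes single backslashes are what the regex engine should actually see (index and sourcetype are placeholders):

index=my_index sourcetype=my_sourcetype
| rex "^(?:[^ \n]* ){7}(?P<src_host>[^ ]+)[^:\n]*:\s+(?P<event_id>[a-f0-9]+:\d+)(?:[^/\n]*/){2}(?P<dest_zone>[^.]+)"
| table src_host event_id dest_zone

If rex extracts the fields here but the stored inline extraction does not, the extraction's escaping is the likely culprit.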
Hello, I'm trying to install https://splunkbase.splunk.com/app/5022 in a Splunk Cloud instance. If I download the app file and try to install it manually, I get this error: "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." In Splunk, under Find More Apps, I searched for HTTP, HTTP Alert, Brendan's name... and the app does not show up. Could anyone advise whether I'm doing something wrong, or how I can get this app installed? Thanks in advance.
We are looking to upgrade our Splunk instance to the latest version. I would like to download the install manuals for Splunk Enterprise v10 as well as other documents. I noticed the new documentation portal no longer offers downloadable PDFs for the material. Has anyone else encountered this? Is this no longer an option with the new portal? I appreciate any insight.
Hi guys, I'm trying to embed a 3D visualization I made with three.js in my Splunk dashboard, but it doesn't work. I've put my main.js in .../appserver/static and the HTML in an HTML panel in my dashboard. Any docs/recommendations? Thanks, Alecanzo.
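A minimal Simple XML sketch of the usual pattern for loading custom JS from appserver/static, assuming a Classic dashboard (the container id and label are placeholders; main.js would need to find the div and attach the three.js renderer to it):

<dashboard script="main.js">
  <label>3D Visualization</label>
  <row>
    <panel>
      <html>
        <!-- main.js renders into this container -->
        <div id="three-container" style="height: 400px;"></div>
      </html>
    </panel>
  </row>
</dashboard>

Note that after changing files in appserver/static, a restart or a debug/refresh is typically needed before the dashboard picks them up.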
Hi all, I'm building a distributed Splunk architecture with:

1 Search Head
2 Indexers (not in a cluster)
1 Heavy Forwarder (HF) to route logs from Universal Forwarders (UFs)

I want to route logs to different indexers based on the index name, for example:

Logs from AD servers should go to indexer01, using index=ad_index
Logs from file servers should go to indexer02, using index=fs_index

Here is my current config on the HF:

props.conf
[default]
TRANSFORMS-routing = route_to_index02

transforms.conf
[route_to_index02]
REGEX = ^fs_index$|^ad_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index02

outputs.conf
[tcpout]

[tcpout:index01]
server = <IP>:9997

[tcpout:index02]
server = <IP>:9997

And here is the example inputs.conf from the AD server:

[WinEventLog://Security]
disabled = 0
index = ad_index
sourcetype = WinEventLog:Security

[WinEventLog://System]
disabled = 0
index = ad_index
sourcetype = WinEventLog:System

But right now everything is going to index02, regardless of the index name. So my question is: how can I modify props.conf and transforms.conf on the HF so that ad_index logs go to index01 and fs_index logs go to index02? Thanks in advance for any help.
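A sketch of one way this is commonly done, splitting the single transform in two and matching on the index metadata key via SOURCE_KEY (the stanza names are placeholders; the posted transform has no SOURCE_KEY, so its REGEX runs against _raw rather than the index name, and its single FORMAT sends everything to index02):

props.conf
[default]
TRANSFORMS-routing = route_ad, route_fs

transforms.conf
[route_ad]
SOURCE_KEY = _MetaData:Index
REGEX = ^ad_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index01

[route_fs]
SOURCE_KEY = _MetaData:Index
REGEX = ^fs_index$
DEST_KEY = _TCP_ROUTING
FORMAT = index02

FORMAT here must match the tcpout group names in outputs.conf (index01 and index02 above).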
In Dashboard Studio for ITSI, we have enabled the Infrastructure Add-on and the Service Map, but I am wondering what other types of data sources can be added. For example, I would like to be able to connect to the Kubernetes API to run kubectl commands, etc., so we could display the current settings for Kubernetes deployments, such as the autoscaling config. This is how the data sources are currently configured; we would like to be able to add more types of data sources to this list. Any ideas on this?
Hi everyone! I'm currently working on an on-premises Splunk SOAR deployment and evaluating its performance on an AWS EC2 t3.xlarge instance (4 vCPU, 16 GB RAM, EBS-backed storage). I'd love your input on the following:

What would be a recommended build configuration (CPU, RAM, disk) to support this kind of playbook usage?
Does allowing multiple users to run playbooks simultaneously change the sizing recommendations?
Any experience tuning playbook runners or autoscaling settings to handle user-driven playbook execution effectively?

Any advice or sizing tips from your deployments would be much appreciated. Thanks in advance!
Hello, I have a search and I want only the latest result from it. The problem is that for one DeviceName there are multiple SensorHealthState values; the device was Inactive earlier, but the latest event shows it is active now. This search still shows Inactive. How can I get only the latest result?

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| dedup DeviceName
| search DeviceType=Workstation OR DeviceType=Server
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| rename code as 3-Letter-Code
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUTNEW "Company Code"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT "Company Code" as 4LetCode
| lookup lkp-GlobalIpRange.csv 3-Letter-Code OUTPUT Region as Region
| eval Region=mvindex('Region',0), "4LetCode"=mvindex('4LetCode',0)
| rename "3-Letter-Code" as CC
| search DeviceName="bie-n1690.emea.duerr.int"
| search SensorHealthState="active" OR SensorHealthState="Inactive" OR SensorHealthState="Misconfigured" OR SensorHealthState="Impaired communications" OR SensorHealthState="No sensor data"
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState
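A small sketch of the likely fix: dedup keeps the first event it encounters, so sorting on the event's own Timestamp field before deduplicating ensures the newest record per device wins (this assumes Timestamp is the authoritative event time in this data, as the rex above suggests):

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| sort 0 -Timestamp
| dedup DeviceName
| ...

Alternatively, stats latest(SensorHealthState) AS SensorHealthState by DeviceName achieves the same without dedup.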
We are using SC4S to collect local logs from FortiAnalyzer. We've noticed an error: the timestamp within the log file does not match the event time in Splunk. This delay is causing an issue: when logs are first sent from FortiAnalyzer, they are not immediately searchable in Splunk; instead, they only become searchable 7 hours later. The problem appears to be isolated to the FortiAnalyzer local logs. All other log sources collected via SC4S are working correctly, even logs forwarded through FortiAnalyzer to Splunk, whose timestamps and event times match perfectly. How can we resolve this issue?
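A fixed 7-hour offset usually points at a timezone misparse rather than a delivery delay, so one hedged avenue is a TZ override in props.conf for just this sourcetype (the stanza name below is a placeholder; check what sourcetype SC4S actually assigns to the FortiAnalyzer local logs):

# props.conf on the indexers (sourcetype name is an assumption)
[fortinet:fortianalyzer:log]
TZ = UTC

Setting TZ to the appliance's real timezone stops Splunk from interpreting the local timestamps as a future time, which is what keeps events unsearchable until the wall clock catches up.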
Hi Community, I'm trying to configure BMC Helix under Apps in SOAR for the REST API but am unable to connect; it keeps returning either a 404 or a 200 response without a working connection. Can anyone let me know how I can configure the BMC app using a token instead of credentials, and configure non-SSO authentication?
Hi All, I need to upgrade the Splunk Universal Forwarder (UF) on AIX 7.2 from version 8.2.9 to 9.4.3. However, after attempting the upgrade, the Splunk UF crashes immediately on startup. Could you please provide the proper upgrade steps and let me know if there are any known limitations or compatibility issues with this upgrade? Thanks in advance for your help.
Hello Splunkers!! I want to combine both of the queries below using append, but it doesn't work; it always gives me only one section of the results. Please help me fix it.

Search 1:

(index=si_error source=scada (error_status=CAME_IN OR error_status=WENT_OUT) (_time=Null OR NOT virtual))
| fields - _raw
| fields + area, zone, equipment, element, isc_id, error, error_status, start_time
| search (area="*"), (zone="*"), (equipment="*"), (isc_id="*")
| eval _time=exact(if(isnull(start_time),'_time',max(start_time,earliest_epoch))), _virtual_=if(isnull(virtual),"N","Y"), _cd_=replace('_cd',".*:","")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup isc_id error _time
| fields - _virtual_, _cd_
| fillnull value="" element
| sort 0 -_time -"_indextime"
| streamstats window=2 global=false current=true earliest(_time) AS start latest(_time) AS stop, count AS count by area zone equipment element error
| search error_status=CAME_IN
| lookup isc id AS isc_id OUTPUTNEW statistical_subject mark_code
| lookup new_ctcl_21_07.csv JoinedAttempt1 AS statistical_subject, mis_address AS error OUTPUTNEW description, operational_rate, technical_rate, alarm_severity
| fillnull value=0 technical_rate operational_rate
| fillnull value="-" alarm_severity mark_code
| eval description=coalesce(description,("Unknown text for error number " . error)), error_description=((error . "-") . description), location=((mark_code . "-") . isc_id), stop=if((count == 1),null,stop), start=exact(coalesce(start_time,'_time')), start_window=max(start,earliest_epoch), stop_window=min(stop,if((latest_epoch > now()),now(),latest_epoch)), duration=round(exact((stop_window - start_window)),3)
| fields + start, error_description, isc_id, duration, stop, mark_code, technical_rate, operational_rate, alarm_severity, area, zone, equipment
| dedup isc_id error_description start
| sort 0 start isc_id error_description asc
| eval operational_rate=(operational_rate * 100), technical_rate=(technical_rate * 100), "Start time"=strftime(start,"%d-%m-%Y %H:%M:%S"), "Stop time (within window)"=strftime(stop,"%d-%m-%Y %H:%M:%S"), "Duration (within window)"=tostring(duration,"duration")
| dedup "Start time","Stop time (within window)", isc_id, error_description, mark_code
| search NOT error_description="*Unknown text for error*"
| search technical_rate>* AND operational_rate>* (alarm_severity="*") (mark_code="*")
| rename error_description as "Error ID", isc_id as Location, mark_code as "Mark code", technical_rate as "Technical %", operational_rate as "Operational %", alarm_severity as Severity
| lookup mordc_Av_full_assets.csv Area as area, Zone as zone, Section as equipment output TopoID
| lookup mordc_topo ID as TopoID output Description as Area
| search Area="Depalletizing, Decanting"
| stats count as Scada_count by Area
| table Scada_count

Search 2:

index=internal_statistics_1h
    [| inputlookup internal_statistics
     | where (step="Defoil and decanting" OR step="Defoil and depalletising") AND report="Throughput" AND level="step" AND measurement IN("Case")
     | fields id
     | rename id AS statistic_id]
| eval value=coalesce(value, sum_value)
| fields statistic_id value group_name location
| eval _virtual_=if(isnull(virtual), "N", "Y"), _cd_=replace(_cd, ".*:", "")
| sort 0 -_time _virtual_ -"_indextime" -_cd_
| dedup statistic_id _time group_name
| fields - _virtual_ _cd_
| lookup internal_statistics id AS statistic_id OUTPUTNEW report level step measurement
| stats sum(value) AS dda_count
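A hedged sketch of the append pattern that usually works for stitching two aggregated searches together; the "..." stands for the full pipelines above, elided here, and the final stats merges the two single-row results into one row (values() is one choice; first() would also do):

index=si_error source=scada ...
| stats count AS Scada_count by Area
| append
    [ search index=internal_statistics_1h ...
      | stats sum(value) AS dda_count ]
| stats values(Scada_count) AS Scada_count, values(dda_count) AS dda_count

If only one section of results ever appears, it is worth checking whether the appended subsearch hits the subsearch time or row limits, or returns zero rows when run on its own.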
I'm trying to ingest some metrics with very long attribute values, and the length of "<dim_name>::<dim_value>" seems to be limited to 2048 characters; anything beyond that gets truncated. Is there a way to increase this limit?
Hello, I would like to know if there are any plans for Splunk to support OIDC in addition to SAML. If so, is there a roadmap or estimated timeline for this support? Thank you.
Hello everyone, could you help me? I have a Splunk Heavy Forwarder server, version 8.1.14, that simply forwards data from a closed zone of our network. I need to upgrade it to version 9.4; judging by the Splunk documentation this is possible, if I understood everything correctly. I would like to build a test stand first, but I can't find version 8.1.14 under Previous Releases of Splunk Enterprise. Does anyone have a download link?
Hello Team, I have a panel with a table visualization; when a row is clicked, it has to pass a value from this panel to another panel's data source (Splunk query). I have tried this by adding an interaction (set tokens) and using the token value in panel 2.

Panel 1 interaction:

{
  "type": "drilldown.setToken",
  "options": {
    "tokens": [
      {
        "token": "event_id",
        "key": "eventid"
      }
    ]
  }
}

Panel 2 data source (Splunk query):

`citrix_alerts`
| fields - Component, Alert_type, Country, level, provider, message, alert_time
| search event_id=$eventid$

Panel 2 JSON:

{
  "type": "splunk.table",
  "options": {
    "backgroundColor": "transparent",
    "tableFormat": {
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
      "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
      "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
      "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
    }
  },
  "dataSources": {
    "primary": "ds_pRiJzPOh"
  },
  "showProgressBar": false,
  "showLastUpdated": false,
  "context": {}
}
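One hedged observation from the snippets above: the interaction defines a token named event_id, while the panel 2 query references $eventid$, so the two names never line up; for Dashboard Studio table clicks the key also typically takes the form row.<field>.value. A sketch with matching names, assuming the clicked column is called event_id:

{
  "type": "drilldown.setToken",
  "options": {
    "tokens": [
      {
        "token": "event_id",
        "key": "row.event_id.value"
      }
    ]
  }
}

and in the panel 2 query:

| search event_id=$event_id$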
Query:

| tstats count from datamodel=Network_Sessions.All_Sessions where nodename=All_Sessions.VPN action=failure vpn.signature="WebVPN" by _time span=1h

I don't understand something about this datamodel: my output is always 0, yet when I look at it in a pivot table I can see data in it.
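A hedged guess at the cause: in tstats against a data model, field names generally need the root dataset prefix, so bare action and vpn.signature may silently match nothing. A sketch with the prefixes applied (whether signature lives under All_Sessions in your model is an assumption worth verifying in the data model editor):

| tstats count from datamodel=Network_Sessions.All_Sessions
    where nodename=All_Sessions.VPN All_Sessions.action=failure All_Sessions.signature="WebVPN"
    by _time span=1h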