All Topics


Dashboard panels have recently stopped being a consistent size across the row, introducing grey space on our boards. This is happening throughout the app on Splunk Cloud version 9.2.2403.111. Does anyone know of any changes or settings which may have affected this, and how it can be resolved? Thanks

On a fresh deployment of O11y, we are following the guide from the setup wizard and running the following helm install command:

helm install splunk-otel-collector --set="cloudProvider=aws,distribution=eks,splunkObservability.accessToken=xxxx-xxxxx,clusterName=eks-test-cluster,splunkObservability.realm=eu2,gateway.enabled=true,splunkPlatform.endpoint=https://my-splunk-cloud-hec-input,splunkPlatform.token=my-token,splunkObservability.profilingEnabled=true,environment=test,operator.enabled=true,agent.discovery.enabled=true" splunk-otel-collector-chart/splunk-otel-collector

However, this fails with the following error:

Error: INSTALLATION FAILED: template: splunk-otel-collector/templates/operator/instrumentation.yaml:2:4: executing "splunk-otel-collector/templates/operator/instrumentation.yaml" at <include "splunk-otel-collector.operator.validation-rules" .>: error calling include: template: splunk-otel-collector/templates/operator/_helpers.tpl:17:13: executing "splunk-otel-collector.operator.validation-rules" at <.Values.instrumentation.exporter.endpoint>: nil pointer evaluating interface {}.endpoint

This seems to be because _helpers.tpl is expecting a value for instrumentation.exporter.endpoint, however the value according to the chart (and the documentation) is instrumentation.endpoint: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/_helpers.tpl (line 13 is where it is mentioned).

We have tried providing instrumentation.exporter.endpoint as an additional parameter, but instead get the error:

Values don't meet the specifications of the schema(s) in the following chart(s): splunk-otel-collector: - instrumentation: Additional property exporter is not allowed

(Which is true - instrumentation.exporter.endpoint is not defined here: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/templates/operator/instrumentation.yaml, line 20.)

We also get the same error if we provide a complete values.yaml file with both formats of the instrumentation endpoint defined.

It looks like _helpers.tpl was edited to include this endpoint specification about a month ago, so surely we cannot be the first people to be tripped up by this? Is there anything else I can try, or do we need to wait for the operator to be fixed?

Does Splunk on-prem or Cloud have a solution that allows users to sign in as an Analyst when performing that role and elevate to Admin when carrying out Admin tasks? This is something that many standards, such as ISO 27001 and others, require. I hope someone can help; if not, I will make a feature request.

Hi, I have encountered a confusing problem regarding Postgres data integration. I want to execute this statement every two minutes to retrieve the latest results from the database, so I use the following statement for filtering:

SELECT history_uint.itemid, history_uint.value, interface.ip, items.name, hosts.host, TO_TIMESTAMP(history_uint.clock) AS clock_datetime
FROM history_uint
JOIN items ON history_uint.itemid = items.itemid
JOIN hosts ON items.hostid = hosts.hostid
JOIN interface ON interface.hostid = hosts.hostid
WHERE history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120)
AND items.flags = 4
AND (items.name LIKE '%Bits sent%' OR items.name LIKE '%Bits received%')
AND (hosts.host LIKE 'CN%' OR hosts.host LIKE 'os%')
ORDER BY history_uint.clock DESC
LIMIT 90000;

This SQL statement executes perfectly in database tools. However, it cannot be executed in dbxquery, and the error is unknown. I found that the key cause is the following expression: history_uint.clock > (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120). When I replaced (FLOOR(EXTRACT(EPOCH FROM NOW())) - 120) with a literal Unix timestamp, everything was fine. I tried to replace it with other SQL expressions, but they all failed. Please help me analyze this issue. Thank you.

Some environment information:
Postgres version: 14.9
Java version: 21.0.5
DB Connect version: 3.18.1
Postgres JDBC version: 1.2.1

Thank you

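A minimal sketch of one possible workaround, given that a literal Unix timestamp works: compute the epoch cutoff in SPL and substitute it into the query with map, so dbxquery only ever receives a literal number instead of the NOW()-based expression. The connection name postgres_zabbix and the trimmed-down SQL below are placeholders, not the real configuration.

| makeresults
| eval cutoff = now() - 120
| map maxsearches=1 search="| dbxquery connection=\"postgres_zabbix\" query=\"SELECT history_uint.itemid, history_uint.value, TO_TIMESTAMP(history_uint.clock) AS clock_datetime FROM history_uint WHERE history_uint.clock > $cutoff$ ORDER BY history_uint.clock DESC LIMIT 90000\""

The map command substitutes $cutoff$ from the makeresults row, which sidesteps whatever the JDBC layer is doing with the nested EXTRACT expression.
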
In Splunk Cloud, for one of my client environments, I'm seeing the message below: TA-pps_ondemand Error: KV Store is disabled. Please enable it to start the data collection. Please help me with a suitable solution.

Hey all, 2-part question here... I'm using the MLTK Smart Forecasting tool to model hard drive free space (time vs. hard drive space). Currently the y-axis (i.e. free space %/free space MB) automatically adjusts its range to the graph produced by the forecast. I would like it to instead run from 0-100 (in the case of Free Space %) and be something reasonable for the Free Space MB line. By extension, how would I get the graph to tell or show me where the x-intercept would be from the prediction and/or confidence interval (i.e. tell me the date the hard drive would theoretically run out of space; where Free Space %/MB = 0)? Attached is an image of the current output visualization should this help.

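For the second part, a minimal sketch of one approach, assuming the assistant's model has been saved and can be applied with | apply; the lookup name, model name, and predicted(FreeSpacePct) field are all placeholders for whatever the forecast actually produces. It filters the forecast rows to the first point at or below zero and reports its timestamp (it returns nothing if the prediction never crosses zero within the forecast horizon).

| inputlookup hdd_metrics.csv
| apply hdd_free_space_forecast
| where 'predicted(FreeSpacePct)' <= 0
| head 1
| eval projected_exhaustion = strftime(_time, "%Y-%m-%d %H:%M")
| table projected_exhaustion
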
Hi there, I have an issue where the Drill-down and Next Steps are not displayed in Incident Review. I built a Splunk lab for research and development myself: I just installed Splunk Enterprise and Enterprise Security (no other external apps) and I ingest DVWA into my Splunk. As you know, DVWA has various vulnerabilities, and I want to use this as a log source that I then manage in Splunk. Therefore, I made a rule about uploading inappropriate files. The query is like this:

index=lab_web sourcetype="apache:access"
| rex field=_raw "\[(?<Time>[^\]]+)\] \"(?<Method>\w+) (?<Path>/DVWA/vulnerabilities/upload/[^/]+\.\w+) HTTP/1.1\" (?<Status>\d{3}) \d+ \"(?<Referer>[^\"]+)\" \"(?<UserAgent>[^\"]+)\""
| eval FileName = mvindex(split(Path, "/"), -1)
| eval FullPath = "http://localhost" . Path
| where match(FileName, "\.(?!jpeg$|png$)[a-zA-Z0-9]+$")
| table Time, FileName, FullPath, Status

In that correlation search, I added a notable action with the drill-down and the next steps filled in. But when I open Incident Review, why are the drill-down and next steps I created not shown? Maybe there is an application I haven't installed, or something else? I will attach my full correlation search settings, including the notable, drill-down, and Next Steps.

Splunk Enterprise Version: 9.3.1
Enterprise Security Version: 7.3.2

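For reference, a minimal sketch of a drill-down search that would line up with the fields this correlation search produces; the $FileName$ token is substituted from the notable event, so if a referenced field is missing from the notable, the drill-down may not appear in Incident Review. The field name here simply mirrors the query above and is an assumption about how the notable was configured.

index=lab_web sourcetype="apache:access" "$FileName$"
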
I'm trying to format timestamps in a table in Dashboard Studio. The original times are values such as: 2024-10-29T10:13:35.16763423-04:00

That is the value I see if I don't add a specific format. If I add a format to the column: "YYYY-MM-DD HH:mm:ss.SSS Z", it is formatted as: 2024-10-29 10:13:35.000 -04:00

Why are the millisecond values zero? Here is the section of the source code for reference:

"visualizations": {
    "viz_mfPU11Bg": {
        "type": "splunk.table",
        "dataSources": {
            "primary": "ds_xfeyRsjD"
        },
        "options": {
            "count": 8,
            "columnFormat": {
                "Start": {
                    "data": "> table | seriesByName(\"Start\") | formatByType(StartColumnFormatEditorConfig)"
                }
            }
        },
        "context": {
            "StartColumnFormatEditorConfig": {
                "time": {
                    "format": "YYYY-MM-DD HH:mm:ss.SSS Z"
                }
            }
        }
    }
},

Any ideas what I'm doing wrong? Thanks, Andrew

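One possible workaround, sketched under the assumption that the column formatter is dropping the fractional seconds when it re-parses the string: reshape the value in SPL before it reaches the table, so the dashboard only has to display a pre-formatted string and no column format is needed. The field name Start is taken from the dashboard source above; the regex keeps only the first three fractional digits.

| eval Start = replace(Start, "^(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}\.\d{3})\d*(-|\+)(\d{2}):(\d{2})$", "\1 \2 \3\4:\5")

Applied to the sample value, this yields 2024-10-29 10:13:35.167 -04:00 without relying on the dashboard's time formatter at all.
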
The Stream app can save pcaps using Configure Packet Stream. I was able to get packets saved using just IPs. Now I want to search for content based on some Snort rules. For ASCII content, I am trying to create a new target using the field content/contains by just putting in an ASCII word. For hex values, there are no instructions. Do I use escape characters \x01\x02...., |01 02 ...|, or a regular expression? Is there an example?

When creating an incident for a specific server, we want to include a link to that entity in IT Essentials Work; however, the URL appears to only be accessible using the entity_key.

Is there any simple way to get the URL directly to an entity from the hostname, or is it required to get the entity_key from the KV store itsi_entities and then combine that into the URL?

In Splunk App for Infrastructure, you could simply use the host name in the URL, but I cannot find any way to do this with ITEW.

Example URL: https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?entity_key=82570f87-9544-47c8-bc6g-e030c522barb

Looking to see if there's a way to do something like this: https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?host=<hostname>

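If the KV store route turns out to be the only option, a minimal sketch of that approach, assuming the itsi_entities collection is exposed as a lookup that returns _key and stores the hostname in the title field (both of those are assumptions about the collection's schema):

| inputlookup itsi_entities where title="myhostname"
| eval entity_url = "https://<stack>.splunkcloud.com/en-US/app/itsi/entity_detail?entity_key=" . _key
| table title entity_url
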
Hi team, I have been experiencing issues with log ingestion on a Windows server and I was hoping to get some advice.

The files are generated on a mainframe and transmitted onto a local share on a Windows server via TIBCO jobs. The files are generated in 9 windows throughout the day - 3 files at a time, varying in size from a few MB up to 3 GB. The solution has worked fine in lower environments, likely because of looser file/folder restrictions, but in PROD only one or two files per window get ingested. The logs indicate that Splunk can't open or read the files.

The running theory is that the process writing the files to disk is locking them, so Splunk can't read them. I'm currently reviewing the permission sets for the TIBCO service account and the Local System account (the Splunk UF runs as this account) in the lower environments to try and spot any differences that could be causing the issue - based on the information in the post below: https://community.splunk.com/t5/All-Apps-and-Add-ons/windows-file-locking/m-p/14126

In addition to that, I was exploring the possibility of using the "monitornohandle" stanza, as it seems to fit the use case I am dealing with - monitoring single files that don't get updated frequently. But I haven't been able to determine from the documentation whether I can use wildcards in the filename - for reference, this is the documentation I'm referring to: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Data/Monitorfilesanddirectorieswithinputs.conf#MonitorNoHandle.2C_single_Windows_file

I'd appreciate any insights from the community regarding either the permissions or the use of the "monitornohandle" input stanza. Thanks in advance,

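One way to corroborate the locking theory from the Splunk side, sketched as a diagnostic: search the forwarder's internal logs for tailing warnings and errors around the ingestion windows, optionally adding a keyword from the share path to narrow it down. The host placeholder below is an assumption; the component names are the ones normally associated with file monitoring.

index=_internal host=<your_forwarder> sourcetype=splunkd log_level IN (WARN, ERROR) component IN (TailReader, WatchedFile, TailingProcessor)

If the files really are locked by the writing process, these messages typically spell out the open/read failure and the file path involved.
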
Putting together a query that shows, at an individual alert level, the number of times the alert fired in a day and the average we were expecting. Below is the query as it stands now, but I am looking for a way to only show records from today/yesterday, instead of for the past 30 days. Any help would be appreciated.

index=_audit action="alert_fired" earliest=-30d latest=now
| eval date=strftime(_time, "%Y-%m-%d")
| stats count AS actual_triggered_alerts by ss_name date
| eventstats avg(actual_triggered_alerts) AS average_triggered_alerts by ss_name
| eval average_triggered_alerts = round(average_triggered_alerts,0)
| eval comparison = case(
    actual_triggered_alerts == average_triggered_alerts, "Average",
    actual_triggered_alerts > average_triggered_alerts, "Above Average",
    actual_triggered_alerts < average_triggered_alerts, "Below Average")
| search comparison!="Average"
| table date ss_name actual_triggered_alerts average_triggered_alerts
| rename date as "Date", ss_name as "Alert Name", actual_triggered_alerts as "Actual Triggered Alerts", average_triggered_alerts as "Average Triggered Alerts"

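A minimal sketch of one way to do that: keep the 30-day window so the average is still computed over the full month, then filter the displayed rows down to yesterday and today just before the final table/rename.

| eval cutoff = strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| where date >= cutoff
| fields - cutoff

Since date is already a "%Y-%m-%d" string, the lexicographic comparison against the start-of-yesterday cutoff keeps exactly the last two days while leaving the eventstats average untouched.
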
Is the list of clients displayed on the Forwarder Management console synchronized with other Splunk servers? We have two Splunk deployment servers, and a Cluster Manager, that show the same list. Previously the DS only showed the clients it was actively connected with. Did this feature get added in 9.2 when the DS was updated?

I have the following props which works fine in the "Add Data" GUI and a test file of logs:

EVENT_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
EVENT_BREAKER_ENABLE = true
LINE_BREAKER = ([\r\n]+)\<.+\>\w{2,4}\s\d{1,2}\s
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false
TIME_FORMAT = %d-%b-%Y %H:%M:%S.%3N
TIME_PREFIX = named\[.+\]\:\s
TRUNCATE = 99999
TZ = US/Eastern

I am trying to pull milliseconds from the log using the 2nd timestamp:

<30>Oct 30 11:31:39 172.1.1.1 named[18422]: 30-Oct-2024 11:31:39.731 client 1.1.1.1#1111: view 10: UDP: query: 27b9eb69be0574d621235140cd164f.test.com IN A response: NOERROR +EDV 27b9eb69be0236356140cd164f.test.com. 30 IN CNAME waw-test.net.; waw-mvp.test.net. 10 IN A 41.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1; test.net. 10 IN A 1.1.1.1;

I have this loaded on the indexers and search heads. But it is still pulling from the first timestamp. A btool on the indexers shows this line that I have not configured:

DATETIME_CONFIG = /etc/datetime.xml

Is this what is screwing me up? Thank you!

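One way to see what the timestamp processor is actually doing at index time, sketched as a diagnostic: look for DateParserVerbose messages in _internal, which report when the configured TIME_PREFIX/TIME_FORMAT cannot be applied and a fallback timestamp is used. The quoted sourcetype below is a placeholder keyword to narrow the results.

index=_internal sourcetype=splunkd component=DateParserVerbose "<your_sourcetype>"
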
Afternoon, Splunkers! Timechart is really frothing my coffee today. When putting in the parameters for a timechart, it always cuts off the latest time value. For example, if I give it a time window of four hours with a span of 1h, I get a total of four data points:

12:00:00
13:00:00
14:00:00
15:00:00

I didn't ask for four data points, I asked for the data points from 12:00 to 16:00. And in this particular example, no, 16:00 isn't a time that hasn't arrived yet or only has partial data; it does this with any time range I pick, at any span setting. Now, I can work around this by programming the dashboard to add 1 second to the <latest> time for the time range. Not that huge of a deal. However, I'm left with a large void on the right-hand side of the time range. Is there any way I can fix this, either by forcing the timechart to show me the whole range or by hiding the empty range?

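A small illustration of what is likely going on, using synthetic data so the index and fields don't matter: timechart labels each bucket by its start time, so a 12:00-16:00 window at span=1h produces exactly four buckets labeled 12:00 through 15:00, and the 15:00 row already contains everything up to 16:00.

| makeresults count=240
| streamstats count as n
| eval _time = relative_time(now(), "@h") - (n * 60)
| timechart span=1h count

This generates one event per minute over the four hours ending at the top of the current hour; the result is four rows of 60, each labeled with its bucket's start. That also explains the workaround behaviour: pushing <latest> past the boundary creates a fifth, almost-empty bucket rather than extending the fourth.
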
I have two queries in Splunk, query 1 and query 2, and an input. Based on the input, I need to execute either query 1 or query 2. I am trying something like the query below, but it is not working for me.

| makeresults
| eval myInput="*"
| append
    [ search "my search related to query 1"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | where myInput="*"
    | eval query_type="query1"
    | table job_id, query_type, myInput ]
| append
    [ search "my search related to query 2"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | where myInput!="*"
    | eval query_type="query2"
    | table job_id, query_type, myInput ]

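One likely reason this doesn't work: myInput only exists on the makeresults row, so the where clauses inside the appended subsearches never see it. A minimal sketch of one alternative, assuming it's acceptable to run both searches and discard the unwanted branch afterwards (the search terms are placeholders): tag each branch, copy myInput onto every row with eventstats, then filter.

| makeresults
| eval myInput="*"
| append
    [ search "my search related to query 1"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | eval query_type="query1" ]
| append
    [ search "my search related to query 2"
    | rex field=_raw "Job id : (?<job_id>[^,]+)"
    | eval query_type="query2" ]
| eventstats values(myInput) as myInput
| where (myInput=="*" AND query_type=="query1") OR (myInput!="*" AND query_type=="query2")
| table job_id, query_type, myInput

If running both searches is too expensive, driving the choice from a dashboard token or a macro that expands to the right base search would avoid the double execution.
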
Splunk versions 9.0.8, 9.1.3, 9.2.x and above have added the capability to process key-value pairs that will be added at index time to all events flowing through the input. It is now possible to "tag" all data coming into a particular HEC token. HEC will support all present and future inputs.conf.spec configs (_meta, TCP_ROUTING, SYSLOG_ROUTING, queue, etc.).

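For example, if a HEC token's input stanza added an index-time field via _meta, every event arriving on that token can then be filtered on it with indexed-field syntax. The index name and the environment::prod pair below are purely illustrative, not values from the release notes.

index=my_hec_index environment::prod
| stats count by host, sourcetype
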
Hi, a few days ago I installed the UF on an AIX server, but it has had some issues: the service runs, but then it stops after some time. The UF version is 9.0.5.

Hello, everyone. I'm looking to achieve this in a Splunk modern dashboard. It seems native and intuitive in a normal Splunk search, but the paint brush is not available in the dashboard, and I found no useful option in the edit tools. Could this be achieved via the source code or any other way? Thanks to anyone who may help.