All Posts

Hi @BRFZ, don't use the inputs you can select during installation: they are enabled in $SPLUNK_HOME\system\local and aren't manageable by the Deployment Server. It's better not to enable these inputs and instead to install Splunk_TA_windows (manually or via the Deployment Server), remembering to enable its inputs. In this way, you can also define the index in which these logs are stored. Anyway, to answer your question: by default they are in the main index. Ciao. Giuseppe
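For reference, once Splunk_TA_windows is deployed, enabling a performance input and routing it to a dedicated index looks roughly like this (a sketch; the CPU stanza and the windows_perf index name are just examples, and the index must already exist):

# $SPLUNK_HOME\etc\apps\Splunk_TA_windows\local\inputs.conf
[perfmon://CPU]
# which Performance Monitor object and counters to collect
object = Processor
counters = % Processor Time
instances = *
# polling interval in seconds
interval = 10
# route events to a dedicated index instead of main
index = windows_perf
disabled = 0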
| lookup cmdb_asset_inventory.csv Reporting_Host as IP_Address
| lookup cmdb_asset_inventory.csv Reporting_Host as fqdn_hostname
| lookup cmdb_asset_inventory.csv Reporting_Host as hostname
| stats count by Hostname
| append
    [| inputlookup cmdb_asset_inventory.csv
     | stats count by Hostname]
| stats count by Hostname
| where count=1
Hello, I installed the forwarder on a Windows machine, and during the installation, I selected the Windows performance monitor to collect performance data. However, I am not sure where to find this data in Splunk or which index it is stored in by default.
I have edited max_upload_size from 500 to 8000, but it still can't upload Enterprise Security. I have tried this repeatedly and restarted Splunk every time I save the configuration, but nothing happens. Is there another way to install Splunk ES on Windows?
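In case it helps: the upload limit normally lives in web.conf, and a large app package can also be installed from the CLI instead of the web upload. A sketch, assuming a default Windows install path (the package filename is just a placeholder, and ES has its own post-install setup steps, so check the ES installation docs for your version):

# %SPLUNK_HOME%\etc\system\local\web.conf
[settings]
# maximum upload size in MB for the web UI
max_upload_size = 8000

"C:\Program Files\Splunk\bin\splunk.exe" install app C:\temp\splunk-es-package.spl

A restart is needed after changing web.conf; if the UI upload still fails after that, the CLI route avoids the HTTP upload limit entirely.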
Hi @DonBaldini, the only way to correlate heterogeneous data sources is to find a common key and map the values from each source onto that common key to use in the stats command. So you need to find a key that always has a common value between the two data sources. Ciao. Giuseppe
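A minimal sketch of that pattern, with hypothetical index and field names (src_ip and ip_address stand in for whatever field carries the shared value in each source):

(index=sourceA OR index=sourceB)
| eval joinkey=coalesce(src_ip, ip_address)
| stats values(*) as * by joinkey

coalesce picks whichever of the two fields is present in each event, so both sources end up with the same key and a single stats groups them together without subsearches or join.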
| inputlookup cmdb_asset_inventory.csv
| rename Generic_Hostname as Reporting_Host
| eval Reporting_Host=lower(Reporting_Host)
| stats count by Reporting_Host Hostname Environment Tier3 Operating_System
| join type=left
    [ search index=prod_syslogfarm
    | eval Reporting_Host=lower(Reporting_Host)
    | stats values(Reporting_Host) as Exists by Reporting_Host ]
| fillnull Exists value=0
| search Exists=0

FYI, the query used to build cmdb_asset_inventory:

index=cmdb
| eval Generic_Hostname=mvappend(Hostname, IP_Address)

In cmdb_asset_inventory, a hostname may map to multiple IP addresses (as you can see below).

Output:

Reporting_Host   Hostname   Environment   Tier3    Operating_System      count   Exists
1.11.12.13       xyz        Production    Server   Windows Server 2022   1       0
1.0.1.1          xyz        Production    Server   Windows Server 2022   1       0
xyz              xyz        Production    Server   Windows Server 2022   1       xyz
xyz.abc.com      xyz        Production    Server   Windows Server 2022   1       0

I've been able to achieve partial success with the query where Exists=xyz. The challenge I'm facing is that the host "xyz" is reporting with the hostname "xyz", and I'm able to look up this hostname in the inventory. Once a match is found, it should ignore all other combinations, since "xyz" in the syslog host is present in the inventory lookup.

I tried my best to explain my requirement; apologies if something above doesn't make sense. I will try to be clearer.
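If I understand the requirement (drop a Hostname entirely as soon as any one of its aliases reports in), one approach that might work is to roll the per-alias match flag up to the Hostname level before filtering. A sketch, reusing the field names above; the seen flag is just illustrative:

| inputlookup cmdb_asset_inventory.csv
| eval Reporting_Host=lower(Generic_Hostname)
| join type=left Reporting_Host
    [ search index=prod_syslogfarm
    | eval Reporting_Host=lower(Reporting_Host)
    | stats count as seen by Reporting_Host ]
| fillnull seen value=0
| stats max(seen) as seen,
        values(Environment) as Environment,
        values(Tier3) as Tier3,
        values(Operating_System) as Operating_System
        by Hostname
| where seen=0

max(seen) is non-zero as soon as any alias of a Hostname was observed in prod_syslogfarm, so a matched host drops out once, rather than leaving its other alias rows behind.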
Hello, thank you. The Splunk AR app is installed on my iPhone 12 mini. Before, I could register the device, but that's not the case anymore. Could this be an app bug?
Does count_err have a value for every id you have in your events?
Hi Roberto, Increasing the replica count would help in executing multiple Synthetics jobs at the same time. Basically, it enables simultaneous execution of Synthetics jobs, so you can run more jobs at the same time, or more jobs from the same machine.
Hi @ITWhisperer, Thanks for the reply. count_err does exist in xxx.csv. I forgot to mention that when I do the following, it does appear: [inputlookup xxx.csv | search dag_id=**** | table system, time_range, count_err] but I have to do that in the lookup. Thanks
The solution from yuanliu works, but not for the full JSON file from https://forecast.solar/ The best approach was to use the regex field extractor, but... ...the next step, getting timecharts from this format, won't work with regex:

{
  "result": {
    "watts": {
      "2019-06-22 05:15:00": 17,
      "2019-06-22 05:30:00": 22,
      "2019-06-22 05:45:00": 27,
      ...
      "2019-06-29 20:15:00": 14,
      "2019-06-29 20:30:00": 11,
      "2019-06-29 20:45:00": 7
    },
    "watt_hours": {
      "2019-06-22 05:15:00": 0,
      "2019-06-22 05:30:00": 6,
      "2019-06-22 05:45:00": 12,
      ...
      "2019-06-29 20:15:00": 2545,
      "2019-06-29 20:30:00": 2548,
      "2019-06-29 20:45:00": 2550
    },
    "watt_hours_day": {
      "2019-06-22": 2626,
      "2019-06-23": 2918,
      "2019-06-24": 2526,
      "2019-06-25": 2866,
      "2019-06-26": 2892,
      "2019-06-27": 1900,
      "2019-06-28": 2199,
      "2019-06-29": 2550
    }
  },
  "message": {
    "type": "success",
    "code": 0,
    "text": ""
  }
}
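One sketch that might work here, assuming each event contains the raw JSON and you only want the result.watts block (the watts_block, t, w, and pair field names are just illustrative):

| rex field=_raw "\"watts\":\s*\{(?<watts_block>[^}]+)\}"
| rex max_match=0 field=watts_block "\"(?<t>[^\"]+)\":\s*(?<w>\d+)"
| eval pair=mvzip(t, w, "|")
| mvexpand pair
| eval _time=strptime(mvindex(split(pair, "|"), 0), "%Y-%m-%d %H:%M:%S")
| eval watts=tonumber(mvindex(split(pair, "|"), 1))
| timechart span=15m avg(watts)

The first rex isolates the watts object, the second pulls every timestamp/value pair into two multivalue fields, and mvzip/mvexpand turn them into one row per reading, so timechart has a real _time and a numeric watts to work with.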
Please share the search that you have been trying.
Hi @VijaySrrie, I've never tried it because I don't use Dashboard Studio yet, but you could try cloning your dashboard and choosing Classic dashboard. Ciao. Giuseppe
Hi Team, Is there an easy way to convert a Dashboard Studio dashboard to a Classic dashboard and enable the export option?
Yeah, the issue I have is that the problem ID is the only common field, but by using the problem ID I wouldn't return the unlinked Incident data. Thanks
Hi @DonBaldini, I'd use OR to avoid subsearches. Anyway, I suppose the issue is related to the fact that you're using the incident field, which could be null. Please check the first eval to find a value for the incident field also for the sourcetype "problem". Ciao. Giuseppe
I am analysing Incident to Problem linkage by doing a search of the Incident table and then using a join to the Problem table to get supporting data for linked problems. The problem I have is that, with join, I am close to the threshold at which the search fails for longer time periods. I have tried to use multisearch and an OR search, but I need to retain Incident results where there is no problem linked. Hope this makes sense; here is the code I have written:

| multisearch
    [search index=servicenow sourcetype="incident"]
    [search index=servicenow sourcetype="problem"]
| eval incident=if(sourcetype="incident", number, null()),
       problem=if(sourcetype="incident", dv_problem_id, dv_number)
| stats latest(eval(if(sourcetype="incident", dv_opened_at, null()))) as inc_opened,
        latest(problem) as problem,
        latest(eval(if(sourcetype="problem", dv_state, null()))) as prb_state
        by incident
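Building on the suggestion above, a sketch that avoids join and keeps unlinked incidents (assuming dv_problem_id on incident records and dv_number on problem records refer to the same problem identifier; joinkey and the "none-" prefix are just illustrative):

index=servicenow (sourcetype="incident" OR sourcetype="problem")
| eval joinkey=if(sourcetype="incident", coalesce(dv_problem_id, "none-".number), dv_number)
| eventstats latest(eval(if(sourcetype="problem", dv_state, null()))) as prb_state by joinkey
| where sourcetype="incident"
| stats latest(dv_opened_at) as inc_opened,
        latest(joinkey) as problem,
        latest(prb_state) as prb_state
        by number

eventstats copies the problem state onto every event sharing the key; incidents with no linked problem get a synthetic "none-" key that matches nothing, so they survive with a null prb_state, and the final stats produces one row per incident.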
@ITWhisperer I attempted to execute your search, but my goal is to identify and output the assets that are present in the `myinventory` lookup but absent from the `syslog_farm` index.
I am using the Splunk OTel Collector Helm chart to send logs from my GKE pods to the Splunk Cloud Platform. I have set `UsesplunkIncludeAnnotation` to `true` to filter logs from specific pods. This setup was working fine until I tried to filter the logs being sent. I added the following configuration to my `splunk` values.yaml:

config:
  processors:
    filter/ottl:
      error_mode: ignore
      logs:
        log_record:
          - 'IsMatch(body, "GET /status")'
          - 'IsMatch(body, "GET /healthcheck")'

When I applied this configuration, the specified logs were excluded as expected, but it did not filter logs from the specified pods. I am still receiving logs from all my pods, and the annotation is not taking effect. Additionally, the host is not displaying correctly and is showing as "unknown". (I will attach a screenshot for reference.) My questions are:
1. How can I exclude these specific logs more effectively?
2. Is there a more efficient way to achieve this filtering?
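In case it's useful to others hitting this: one possible explanation is that overriding `config:` replaces parts of the chart's default logs pipeline, so default processors such as k8sattributes (which the include-annotation filtering and pod metadata rely on) and resourcedetection (which sets the host) no longer run. A rough sketch of what re-wiring the pipeline might look like; the exact default processor list varies by chart version, so merge with the pipeline in your rendered config rather than copying this list verbatim:

agent:
  config:
    processors:
      filter/ottl:
        error_mode: ignore
        logs:
          log_record:
            - 'IsMatch(body, "GET /status")'
            - 'IsMatch(body, "GET /healthcheck")'
    service:
      pipelines:
        logs:
          processors:
            # keep the chart's default processors and append the custom filter
            - memory_limiter
            - k8sattributes
            - filter/logs
            - batch
            - resourcedetection
            - resource
            - filter/ottl

and, if I recall the chart's convention correctly, the pods to include carry the annotation:

metadata:
  annotations:
    splunk.com/include: "true"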
Hello, I used the Splunk REST API with the search endpoint to retrieve the latest fired alerts based on a title search. I get the fired alerts in alphabetical order but not in chronological order, since all the alerts obtained have the default field <updated>1970-01-01T01:00:00+01:00</updated>.

Here's the URL and query I used:

https://<host>:<mPort>/services/alerts/fired_alerts?search=name%3DSOC%20-*&&sort_dir=desc&sort_key=updated

| rest /services/alerts/fired_alerts/
| search title="SOC - *"
| sort -updated
| table title, updated, triggered_alert_count, author

Here are the references I used:
Search endpoint descriptions - Splunk Documentation
Using the REST API reference - Splunk Documentation

So, how can I retrieve fired alerts in chronological order with a title search? Or how can I obtain a field indicating the date the alert was triggered? Thanks in advance.
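One sketch that might work: the per-instance endpoint exposes a trigger time, unlike the summary list you queried (this assumes /services/alerts/fired_alerts/- is readable by your role and returns a trigger_time epoch field, as it does on the versions I've used; exact field availability may vary):

| rest /services/alerts/fired_alerts/-
| search savedsearch_name="SOC - *"
| eval triggered_at=strftime(trigger_time, "%Y-%m-%d %H:%M:%S")
| sort - trigger_time
| table savedsearch_name, triggered_at, severity, sid

The summary endpoint only tracks counts per saved search, which is why <updated> stays at the epoch default; the instance-level listing carries one record per triggered alert, each with its own trigger time.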