I am working with a time chart panel in a dashboard. The dashboard has a filter for "hosts", but this particular sourcetype contains only a small subset of the servers included in the filter. I would like this panel to display a blank chart when the filtered server is not among the hosts present in this sourcetype. This is what I have written so far:

<base search> | search host IN ($host$) | timechart max(users) by host usenull=f useother=f limit=6 | addtotals | table _time, host | fillnull host

What happens is that only a blank time chart is displayed, even when the correct server is selected.
Hey all, I am wondering how you can run a search in Splunk and then send the data it returns to a custom Python command for further processing. For example, the search without the custom command:

source="C:\\Documents\\Logs.csv" index="logs" sourcetype="csv" | stats count as alertCount by Alert | stats count(alertCount)

This returns the number of alert types in my CSV. I want to take that number (stored in "alertCount") and pass it to my custom command as a parameter, so the command can send it to an external API via REST. Ultimately my search would look something like this:

source="C:\\Documents\\Logs.csv" index="logs" sourcetype="csv" | stats count as alertCount by Alert | stats count(alertCount) | splunkcommand num=alertCount

"splunkcommand" is my custom Python script that takes a parameter "num" and sends it to an API via REST. However, Splunk tells me that "splunkcommand" needs to be the first command in the search, which seems to make this impossible, because I want to run the SPL search first and then feed its result to the custom command. Is what I am trying to achieve possible?
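In Splunk, only generating commands must appear first in a pipeline; a custom command registered as a streaming or reporting command (for example, one built on the Splunk Python SDK's StreamingCommand class) can legally sit after stats, so a "must be the first command" error usually means the command is registered as a generating command. For the REST side, here is a minimal sketch of the helper such a command could call; the endpoint URL and payload shape are pure assumptions, not a real API.

```python
import json
import urllib.request

# Hypothetical endpoint; replace with the real outbound API.
API_URL = "https://example.invalid/alerts"

def build_payload(num):
    """Serialize the count into the JSON body the (assumed) external API expects."""
    return json.dumps({"alertCount": int(num)})

def send_alert_count(num, api_url=API_URL):
    """POST the count to the external API; returns the HTTP response object."""
    req = urllib.request.Request(
        api_url,
        data=build_payload(num).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)  # performs the network call
```

The custom command would parse its `num` option, then call `send_alert_count(num)` for each record it streams through.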
Good afternoon. I have a question about identifying the type of environment servers are in from their hostnames, which are extracted using the Microsoft add-on for Splunk. The server hostnames are being indexed as follows:

servername"DV"
servername"TV"
Servername"DV"serverName
Servername"TV"servername

Server names with the DV and TV designations belong to the development and test environments respectively. Sometimes the characters are at the end of the server name and sometimes in the middle. I want to run a search that identifies each hostname as development or test and adds that as a column to the search results. If a hostname has neither designation, I would like to label it as "other". I would appreciate any guidance on the best approach for the search string, so I can research it and learn how to do it. Thank you, Dan
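The classification itself is plain substring matching; in SPL it would typically be an eval with case(match(host, "DV"), ...). The same logic, sketched in Python under the assumption that the DV/TV markers are always uppercase exactly as shown:

```python
def classify_env(hostname):
    """Label a host by the DV/TV marker anywhere in its name.

    Assumes the markers are uppercase "DV"/"TV" as in the indexed data;
    adjust the matching if the real names vary in case.
    """
    if "DV" in hostname:
        return "development"
    if "TV" in hostname:
        return "test"
    return "other"
```

The marker can sit anywhere in the string, so a containment test (rather than a prefix/suffix test) covers both the middle and end positions described above.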
Hi Splunk, I have a Splunk account and am trialing our company's Splunk server installation (not cloud). I forgot the account password and tried to reset it on your web portal without success (no email was sent to proceed with the reset). Thanks in advance, Kristian Vuorinen
Hello. In setting up the Add-on for AWS (4.6.1), the IAM role setup expects a role ARN in the format:

arn:aws-us-gov:iam::12345566789:role/XXX_GuardDuty_S3_XXX

But the ARN I got from AWS is formatted like this:

arn:aws-us-gov.iam:12345566789:role/XXX_GuardDuty_S3_XXX

Notice the difference between "gov.iam" and "gov:iam", and between "iam:123" and "iam::123". I had to change the ARN to get the selection added into the app, but I assume it won't work, since the modified ARN isn't in the correct format. Any ideas on how to work around this?
I am a Splunk Cloud customer with an on-prem heavy forwarder. I have an app on the heavy forwarder which I want to configure for a new index. Let me know if I am missing anything: I go into the heavy forwarder and create a new index, let's say it's called "office". Then I go into Splunk Cloud and create an index with the exact same name, "office". Is that all I need to do?
Hello - I have the following search:

<base search> | fields host registrations | stats latest(registrations) by host

This produces the following table:

host  latest(registrations)
Pc1   51
Pc2   29
Pc3   18

How would I add up the values of latest(registrations) to produce a single value across all 3 hosts? For example, I would like only the sum of the latest registrations (98) to display in a single value panel. Thank you!
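The arithmetic is just a sum over the per-host latest values; in SPL one common pattern is to rename the field and append another stats, e.g. `| stats latest(registrations) as latest_reg by host | stats sum(latest_reg) as Total`. The computation itself, as a sketch:

```python
def total_latest(latest_by_host):
    """Sum the per-host latest values into the single figure a
    single-value panel would display."""
    return sum(latest_by_host.values())

# With the table from the question: 51 + 29 + 18 = 98
total_latest({"Pc1": 51, "Pc2": 29, "Pc3": 18})
```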
Hi all, we have a Splunk Enterprise clustered environment, with a cluster of 3 search heads. For many reasons, a lookup file is updated once a day in only one of these search heads (the first one). To update this lookup file also in the other two search heads, we set up a scheduled search with the following string: | inputlookup my_lookup_table.csv
| outputlookup my_lookup_table.csv

Since the lookup is not updated when this search runs on a search head other than the first one, is it possible to force it to always run on the same search head? I know we could send the lookup via SFTP to the other search head servers, but we'd like to avoid that if possible. Thanks in advance.
Hello, the Splunk version is 8.0.6. I am trying to configure a search head to connect to the deployer. I run the following command but get an error:

sudo /opt/splunk/bin/splunk init shcluster-config -auth admin:password1 -mgmt_uri https://10.31.0.28:8089 -replication_port 9000 -replication_factor 3 -conf_deploy_fetch_url http://10.31.0.33:8089 -secret password1 -shcluster_label stg-shcluster1

Can't write file "/root/.splunk/authToken_hostname1_8089": Permission denied

Splunk is running as the splunk user, and boot-start is configured in systemd. Notes: 1. I have read previous posts about similar errors, but their cases differ from mine; I am not starting, and have never started, Splunk as the root user. 2. I have already added the /opt/splunk/bin/splunk command to /etc/sudoers to allow the splunk user. Any suggestions? Regards, SR
We have a Splunk Cloud instance and I'm trying to connect MuleSoft to it. I added the entries below, and when I try to deploy, the console throws the following error.
Error:[ERROR] Failed to execute goal on project splunk_poc: Could not resolve dependencies for project com.mycompany:splunk_poc:mule-application:1.0.0-SNAPSHOT: Failed to collect dependencies at com.splunk.logging:splunk-library-javalogging:jar:1.7.3: Failed to read artifact descriptor for com.splunk.logging:splunk-library-javalogging:jar:1.7.3: Could not transfer artifact com.splunk.logging:splunk-library-javalogging:pom:1.7.3 from/to ext-release-local (http://splunk.artifactoryonline.com/splunk/ext-releases-local:( Permanent Redirect (308) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
I added the below appender in the log4j file:
<Http name="Splunk"
url="https://host:8088/services/collector/raw">
<Property name="Authorization" value="my token" />
<PatternLayout pattern="%-5p %d [%t] %X{correlationId}%c: %m%n" />
</Http>
And I added the below entries in the pom.xml file:
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.10.0</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.10.0</version>
</dependency>
<dependency>
<groupId>com.splunk.logging</groupId>
<artifactId>splunk-library-javalogging</artifactId>
<version>1.7.3</version>
</dependency>
<repository>
<id>ext-release-local</id>
<url>http://splunk.artifactoryonline.com/splunk/ext-releases-local</url>
</repository>
My query

"mwt-service" my query | stats count by channel service date_month

yields results like:

channel service month    count
PBX     FNTF    november 4
STE     ACTR    november 5
PBX     FNTF    october  6
STE     ACTR    october  9

But I want one column per month, like this:

channel service nov oct
PBX     FNTF    4   6
STE     ACTR    5   9

Please advise.
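One common SPL approach is to build a combined row key and reshape with xyseries, e.g. `| eval key=channel." ".service | xyseries key date_month count`, then split the key back into two fields. The reshaping logic itself, sketched in Python:

```python
from collections import defaultdict

def pivot_by_month(rows):
    """Turn (channel, service, month, count) rows into one row per
    (channel, service) with a column per month."""
    table = defaultdict(dict)
    for channel, service, month, count in rows:
        table[(channel, service)][month] = count
    return dict(table)

# Sample data matching the flat stats output above.
rows = [
    ("PBX", "FNTF", "november", 4),
    ("STE", "ACTR", "november", 5),
    ("PBX", "FNTF", "october", 6),
    ("STE", "ACTR", "october", 9),
]
```

Each dict value then corresponds to one wide row of the desired table.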
I want to pass a default value of * for the tokens $VALUES$ and $COUNTS$ used in the drilldown, and when a table field value is clicked, the clicked value should be passed instead. Please help me with the token handling. <row>
<panel>
<viz type="treemap_app.treemap">
<title>Top 15 Result</title>
<search base="health_rules">
<query>|stats count AS "Result Count" BY count value |head 15</query>
</search>
<drilldown>
<set token="VALUES">$row.value$</set>
<set token="COUNTS">$row.count$</set>
</drilldown>
</viz>
</panel>
</row>
Hi, can we create or generate an audit report for agent status? For example, I would like a report on which agents are reporting and which are disconnected in my environments.
Hi all, I have a requirement to send alert results to Cortex (a third-party system). I am thinking of running a custom script as an alert action. Can anyone suggest how I should proceed?
String in the variable alert_type:

|detail.action=blocked|detail.devicename=hd03|detail.virus=fec_virus_macro_sic_1|detail.sha256=fd8a5b3ea9e59d3f863822cd2dddfbfded034f8ddad351c909732f18b1a82662|detail.md5=fecd3f3d9a9233c234bf0b455f73f65b

Objective: split the string on "|" and remove the "=" together with the characters that follow it, leaving:

detail.action detail.devicename detail.virus detail.sha256 detail.md5

I tried something like the rstrip command, but it wasn't working with multivalue fields. It seems I need to use the foreach command, but I'm not sure how to use it.
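The transformation is: split on "|", drop the empty leading piece, and keep only the text before the first "=" in each piece. In SPL, one way to express this is a makemv on "|" followed by mvmap with replace(x, "=.*$", ""). The same logic, sketched in Python:

```python
def field_names(alert_type):
    """Split on '|' and keep only the part before '=' in each piece."""
    return [part.split("=", 1)[0] for part in alert_type.split("|") if part]
```

For the sample string above this yields the five detail.* names with their values stripped off.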
Hello, I use the search below to calculate a volume percentage:

| inputlookup host.csv
| lookup lookup_patch "Computer" as host output FileName, StateName
| search StateName="Non-Compl"
| stats dc(host) as host by StateName FileName
| stats sum(host) as NbNonCompliantPatchesIndHost
| appendcols
[| inputlookup host.csv
| stats dc(host) as NbIndHost]
| eval Perc=round((NbNonCompliantPatchesIndHost/NbIndHost)*100,2)
| table Perc, NbIndHost, NbNonCompliantPatchesIndHost

Now I need to calculate the volume percentage by SITE, so I added a "by SITE" clause to my stats commands, but the result is wrong: sometimes the percentage is over 100%, because NbNonCompliantPatchesIndHost is greater than NbIndHost. What is the solution, please?

| inputlookup host.csv
| lookup lookup_patch "Computer" as host output FileName, StateName
| lookup fo_all HOSTNAME as host output SITE
| search StateName="Non-Compl"
| stats dc(host) as host by StateName FileName SITE
| stats sum(host) as NbNonCompliantPatchesIndHost by SITE
| appendcols
[| inputlookup host.csv
| lookup fo_all HOSTNAME as host output SITE
| stats dc(host) as NbIndHost by SITE]
| eval Perc=round((NbNonCompliantPatchesIndHost/NbIndHost)*100,2)
| table SITE Perc, NbIndHost, NbNonCompliantPatchesIndHost
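A likely cause: appendcols pairs rows purely by position, not by field value, so when both legs are split by SITE the sites can come back in different orders (or one leg can be missing a site entirely), and a site's non-compliant count gets divided by another site's host count. Joining on the SITE key instead (for example with `| join type=left SITE [...]` in place of appendcols) avoids this. The key-based join, sketched in Python:

```python
def site_percentages(noncompliant_by_site, hosts_by_site):
    """Join the two per-site counts on the SITE key rather than by row
    position, so each site's numerator meets its own denominator."""
    return {
        site: round(100.0 * noncompliant_by_site.get(site, 0) / total, 2)
        for site, total in hosts_by_site.items()
    }
```

Sites present in the host inventory but absent from the non-compliant leg correctly come out at 0% instead of borrowing another row's value.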
Hi, as you can see in my XML, I use a dropdown list which is fed from a CSV file. I would like to feed this dropdown from the stats command in my search (stats last(RESPONSIBLE_USER) as "Responsible") instead, so that it only offers the "Responsible" values corresponding to my search. How can I do this, please? <form stylesheet="format.css">
<label>Battery</label>...<fieldset submitButton="true">
<input type="dropdown" token="tok_filterresponsible" searchWhenChanged="true">
<label>Responsible</label>
<choice value="*">*</choice>
<initialValue>*</initialValue>
<default>*</default>
<fieldForLabel>RESPONSIBLE_USER</fieldForLabel>
<fieldForValue>RESPONSIBLE_USER</fieldForValue>
<search>
<query>| inputlookup responsible.csv</query>
</search>
</input>
</fieldset>
<row>
<panel>
<table>
<title></title>
<search>
<query>| inputlookup fo_all
| rename HOSTNAME as host
| lookup lookup_pana "name0" as host OUTPUT BatteryTemp0 BatteryModel0 CycleCount0 HealthState0 LastRecalibration0 ManufactureDate0 DesignCapacity0
| lookup lookup_cmdb_fo_all HOSTNAME as host output SITE RESPONSIBLE_USER DEPARTMENT
| search RESPONSIBLE_USER=$tok_filterresponsible|s$
| stats last(RESPONSIBLE_USER) as "Responsible", last(DEPARTMENT) as Department, last(SITE) as Site, last(BatteryModel0) as "Battery model", last(DesignCapacity0) as "Design capacity (mAH)", last(HealthState0) as "Health state (%)", last(CycleCount0) as "Cycle count",
last(ManufactureDate0) as "Manufacture date", last(LastRecalibration0) as "Last recalibration" by host
| rename host as Hostname As
I have a JSON file (with a .json extension) that contains a complete one-line unstructured JSON document. New events are appended to the JSON array in that same one-line JSON every 5 minutes.
I have gone through multiple answers about duplicate events for JSON; this is what my props.conf looks like on both the search head and the indexer, but I can still see duplicate events when searching on the search head:
[dell:boomi:atom]
LINE_BREAKER=(\},)
MUST_BREAK_AFTER=([\},])
SHOULD_LINEMERGE=false
SEDCMD-remove_header=s/({"jmx":\[)//g
SEDCMD-remove_footer=s/(}]})//g
INDEXED_EXTRACTIONS = JSON
KV_MODE = none
AUTO_KV_JSON = false
TIME_PREFIX={"(?=\d+-\d+-\d+T)
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=24
TRUNCATE = 0
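A quick way to sanity-check what the SEDCMDs and LINE_BREAKER are meant to produce is to mimic them outside Splunk; this sketch assumes the wrapper really is `{"jmx":[ ... ]}` as the SEDCMDs suggest. Note also that because the whole file is a single line that is rewritten in place every 5 minutes, the forwarder can re-read it from the start on each change, which is a common cause of duplicate events independent of these settings.

```python
import re

def break_events(raw):
    """Mimic the props: strip the {"jmx":[ header and }]} footer,
    then break individual events on '},'."""
    body = re.sub(r'^{"jmx":\[', "", raw)
    body = re.sub(r"}\]}$", "", body)
    # Re-add the closing brace that the break consumed.
    return [e if e.endswith("}") else e + "}" for e in body.split("},")]

break_events('{"jmx":[{"a":1},{"b":2}]}')
```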
Hello, I am currently working on integrating Oracle DB 12c with Splunk, and I don't know which audit file Splunk will need for this integration. Any suggestions? BR, Haytham