How can I get the alert email to show only the failed job name, e.g. Job=[ADM-FILENET-DLY], instead of the complete log? Note: the job names are dynamic.

My current alert query:

index=* host=*MYhost* "*IN-RCMCO-DLY*" OR "*ADJ-RECERT-DLY*" OR "*AD*-*Y*" FAILED job_status2=FAILED OR status=FAILED OR status1=FAILED OR ExitCode=FAILED
| rex field=_raw ".*status:\s\[(?P<status1>\S+)\]"
| rex field=_raw "JOB\s(?P<job_status2>\w+)"
| rex field=_raw "(exitCode=)(?<ExitCode>\w+)"
| eval _raw=substr(_raw, 1, 1500)
| table _time job_status2 status1 status ExitCode _raw

Sample log:

22-08-28 18:01:31,323 INFO [main] c.l.b.listener.JobCompletionListener: :::::::::::::::BATCH JOB FAILED:::::::::::JobExecution: id=21099, version=1, startTime=Sun Aug 28 18:01:29 CDT 2022, endTime=Sun Aug 28 18:01:31 CDT 2022, lastUpdated=Sun Aug 28 18:01:29 CDT 2022, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=com.ltss.fw.exception.ApplicationException: Error occured while processing appDocument: In catch block, exception stackTrace,job=[JobInstance: id=21099, version=0, Job=[ADM-FILENET-DLY]], jobParameters=[{chunkSize=null, skipLimit=null, commitInterval=null, time=1661727689449, asOfDate=1661662800000}]
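A minimal sketch of one way to do this: since every failure event carries a Job=[...] token, a single rex on that pattern plus a table of just the extracted field keeps the full _raw out of the email. The field name job_name is my own choice:

```
index=* host=*MYhost* "BATCH JOB FAILED"
| rex field=_raw "Job=\[(?<job_name>[^\]]+)\]"
| where isnotnull(job_name)
| table _time job_name
```

The alert's email action can then reference $result.job_name$ in the subject or message instead of including the raw event.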
How can I change the policy name comparison from an exact match (=) to a "contains" match? Instead of saying:

index=tenable* sourcetype="*" policyName="*" | eval policyName=if(policyName="93e1da98-656c-5cd5-933b-ce6665fc0486-1948841/CIS PostgreSQL 11 (20210915)","PostgreSQL",policyName)

I would like to say something like if(policyName=*CIS PostgreSQL*, ...), but that doesn't work.
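Wildcards are not valid inside eval's = comparison, but eval has like() (SQL-style, with % as the wildcard) and match() (regex) that do substring tests. A sketch of the same rename using like():

```
index=tenable* sourcetype="*" policyName="*"
| eval policyName=if(like(policyName, "%CIS PostgreSQL%"), "PostgreSQL", policyName)
```

match(policyName, "CIS PostgreSQL") would work the same way, with a regular expression instead of % wildcards.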
Just came across an interesting use case, and I'm wondering how people solve it. Phantom talks to an internal asset via HTTP and an API key. This asset has redundancy: if it goes down, a backup comes online. Part of that is name redirection. The data underneath is all the same, but the API key changes. My thought would be to perform a test connectivity check at the top of the playbook and then pass the asset name down the playbook. Is there a smarter way to handle this? Thanks!
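A rough sketch of the probe-then-route idea described above, assuming the two redundant endpoints are configured as separate Phantom assets; the asset names ("primary_api", "backup_api") and the follow-on action name ("get data") are hypothetical, not the app's real names:

```python
# Hypothetical sketch only: asset and action names are assumptions.
import phantom.rules as phantom

def on_start(container):
    # Probe the primary asset first with the app's connectivity test
    phantom.act("test connectivity", assets=["primary_api"],
                callback=route_on_result, name="probe_primary")

def route_on_result(action, success, container, results, handle):
    # Route the real work to whichever endpoint is currently alive
    target = ["primary_api"] if success else ["backup_api"]
    phantom.act("get data", parameters=[{}], assets=target, name="fetch_data")
```

This keeps both API keys in their own asset configurations, so only the routing decision lives in the playbook.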
We have configured our server to send syslog events to our Splunk collectors over syslog UDP port 514, but we are not seeing the hostname listed in the ingested files. How do we get Splunk to display the hostname? Thank you, Angel
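A likely cause is the UDP input stamping events with the sender's IP (or the default host) instead of a name. A minimal inputs.conf sketch on the collector, assuming a plain [udp://514] stanza:

```
# inputs.conf on the receiving Splunk instance
[udp://514]
sourcetype = syslog
connection_host = dns   # reverse-resolve the sender; "ip" keeps the raw IP
```

With sourcetype=syslog, Splunk's default transforms should also rewrite host from the hostname carried in the syslog header itself.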
We have Monitoring of Java Virtual Machines with JMX set up on our Splunk forwarder (Linux), and it runs fine when started with "./splunk start" from the forwarder's bin directory, logging the following:

08-29-2022 09:33:57.733 -0600 INFO SpecFiles - Found external scheme definition for stanza="jmx://" from spec file="/opt/splunkforwarder/etc/apps/SPLUNK4JMX/README/inputs.conf.spec" with parameters="activation_key, config_file, config_file_dir, polling_frequency, additional_jvm_propertys, output_type, hec_port, hec_host, hec_endpoint, hec_poolsize, hec_token, hec_https, hec_batch_mode, hec_max_batch_size_bytes, hec_max_batch_size_events, hec_max_inactive_time_before_batch_flush, log_level"

However, when I start Splunk as a service with sudo service splunk start, everything else starts fine but I get the following errors in splunkd.log:

08-29-2022 09:46:16.519 -0600 ERROR ModularInputs - Introspecting scheme=jmx: Unable to run "python3.7 /opt/splunkforwarder/etc/apps/SPLUNK4JMX/bin/jmx.py --scheme": child failed to start: No such file or directory
08-29-2022 09:46:16.542 -0600 ERROR ModularInputs - Unable to initialize modular input "jmx" defined in the app "SPLUNK4JMX": Introspecting scheme=jmx: Unable to run "python3.7 /opt/splunkforwarder/etc/apps/SPLUNK4JMX/bin/jmx.py --scheme": child failed to start: No such file or directory.

Can anyone point me in the right direction? I set up Splunk as a service with sudo ./splunk enable boot-start -user splunkuser. I suspect there is a mismatch in permissions or environment between splunkuser (the Splunk owner) and root, but I'm not sure where to correct it.
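The "No such file or directory" suggests python3.7 is not on the PATH the service environment sees; an interactive shell and the boot-start environment often differ. A rough way to verify, assuming splunkuser is the service account (the symlink path below is an assumption, adjust to your system):

```
# Does the service account resolve python3.7 the way your shell does?
sudo -u splunkuser -i which python3.7

# If not, one workaround is a symlink in a directory on the service's PATH:
sudo ln -s /usr/bin/python3.7 /usr/local/bin/python3.7
```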
Hi, is there a way to authenticate to the API through SAML? Right now our security policy prohibits the use of local unmanaged accounts. I have SAML authentication with Azure AD configured for web access, but when I try to use those same AD credentials to authenticate to the REST API, it does not work. Please help with steps for configuring Azure AD to work with the REST API in Splunk.
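SAML credentials cannot be presented to the management port directly, but Splunk supports token-based authentication (Settings > Tokens, available since 7.3), and a SAML user can use such a token against the REST API. A sketch, assuming a token has been issued for the SAML user:

```
# <token> is an authentication token issued to the SAML user;
# -k only if the management port uses the default self-signed certificate
curl -k -H "Authorization: Bearer <token>" \
     https://localhost:8089/services/search/jobs
```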
Hello, I have to decommission a site due to a datacenter shutdown. We currently have four sites with 10 indexers each. Site decommissioning is well documented; what is not clear is how the data originating from the decommissioned site is replicated to the remaining sites. Using site_mappings = site4:site2, data originating from site4 is replicated to site2. Suppose there are 20 TB of data: how much does each indexer on site2 receive? Is there some sort of balancing (2 TB each), or is it not predictable? It is also unclear whether the replicated buckets of the decommissioned site are removed by Splunk when the cluster master is restarted, or whether that can be done manually. I need this information to estimate whether the current file system size is enough. Thanks
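For reference, the mapping in question lives in server.conf on the cluster manager; a minimal sketch matching the documented decommissioning procedure:

```
# server.conf on the cluster manager
[clustering]
mode = master
site_mappings = site4:site2
```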
Hi, how can we extract a list of open episodes in Splunk ITSI? Thanks!
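A hedged sketch of one common approach; it assumes grouped notable events land in the itsi_grouped_alerts index, that each episode carries an itsi_group_id, and that status 5 means Closed — all assumptions worth verifying against your ITSI version:

```
index=itsi_grouped_alerts
| stats latest(status) as status by itsi_group_id
| where status != 5
```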
Hello, I have a question about pipeline parallelization. From the docs and other sources I gather that it is safe to enable pipeline parallelization if there are plenty of free resources in the Splunk deployment, particularly CPU cores — in other words, if the CPUs on the indexers or heavy forwarders are "underutilized". My question is: what does "underutilized" mean in numbers, especially in a distributed environment? Example: imagine an indexer cluster of 8 nodes with 16 CPU cores each. The Monitoring Console (historical charts) shows average CPU load of 40%, median CPU load of 40%, and maximum CPU load between 70% and 100%. My view is that it is not safe to enable parallelization in this environment, correct? But when is it safe — when the maximum load is under 50%? Under 25%? What factors should I take into account, and what numbers are "safe"? Could you please share your experience or point me to an available guide? Thank you very much in advance. Best regards, Lukas Mecir
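For context, the setting under discussion; a minimal sketch of enabling a second ingestion pipeline set on an indexer or heavy forwarder (the value 2 is only an example — each extra pipeline set adds a full set of ingestion threads competing for cores):

```
# server.conf
[general]
parallelIngestionPipelines = 2
```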
In one of our dashboards we have a table with a custom action. When the user clicks on a field, we check whether it is the delete field and, if so, get the name of the item we want to delete. We can put it in a JavaScript variable. We also have a search that needs to use this variable — something like the following, where someVariable is updated in a function:

var someVariable = "";
var validateChannelCanBeDeletedSearch = new SearchManager({
    id: "validate something",
    autostart: false,
    search: `| inputlookup some | search some_field="${someVariable}"`
});

Later we manually trigger the search. The problem is that the updated value of someVariable is not used in the query. How can we make it use the updated value?
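The template literal in the constructor is evaluated once, when the SearchManager is created, so later updates to someVariable never reach the query string. A sketch of one fix: re-set the manager's search attribute right before triggering it (settings is a Backbone-style model on SplunkJS search managers):

```javascript
// Rebuild the query with the current value, then kick off the search.
function runValidation(someVariable) {
    validateChannelCanBeDeletedSearch.settings.set(
        "search",
        `| inputlookup some | search some_field="${someVariable}"`
    );
    validateChannelCanBeDeletedSearch.startSearch();
}
```

An alternative is to write the value into a dashboard token and reference it as $some_field$ in the search string, letting the token model handle re-evaluation.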
Hey there! I'm trying to write some code that will interact with the Splunk REST API. I use the Splunk Free edition, version 8.2.3.3. Unfortunately I cannot get any response from port 8089:

```
$ curl https://localhost:8089/services/search/jobs/
curl: (28) Operation timed out after 300523 milliseconds with 0 out of 0 bytes received
```

The URI does not matter; I cannot get any reaction whatsoever. Is this a known limitation, or do I need to configure something? Thanks a lot for any suggestions!
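A timeout (rather than a certificate or authentication error) usually means nothing is listening on the port or a firewall is dropping the traffic; the management port is not disabled by the Free license as such. Two quick checks on the Splunk host itself:

```
# Is splunkd listening on 8089?
ss -tlnp | grep 8089

# Which management port does Splunk itself report?
$SPLUNK_HOME/bin/splunk show splunkd-port
```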
Hi, I have a query requirement: we need to submit a node downtime duration report per node, monthly — how much time each node was down and how much time it was up every month. Please help me with the query. Sample log below (alert_value 100 is up, 200 is down):

08/29/2022 10:05:00 +0000,host="0.0.1.1:NodeUp",alert_value="100"
08/29/2022 10:05:00 +0000,host="0.1.1.1:NodeUp",alert_value="100"
08/29/2022 10:00:00 +0000,host="0.0.1.1:NodeDown",alert_value="200"
08/23/2022 10:10:00 +0000,host="0.0.1.1:NodeUp",alert_value="100"
08/23/2022 09:55:00 +0000,host="0.0.1.1:NodeDown",alert_value="200"

Example: if a node was down for 30 minutes overall in a month, across different dates, we still need to display the hostname along with the downtime (i.e. 30 min) and the remaining uptime duration in one row. Note: our saved search runs every 5 minutes and emits log data like the above, so the timestamps arrive in 5-minute steps.
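A sketch of one way to pair each NodeDown with the next NodeUp and sum per node and month. It assumes the host field always looks like <node>:<NodeUp|NodeDown> and that every Down eventually has a matching Up; index and sourcetype are placeholders:

```
index=your_index sourcetype=your_sourcetype
| rex field=host "^(?<node>[^:]+):(?<state>NodeUp|NodeDown)$"
| sort 0 node -_time
| streamstats current=f window=1 last(_time) as next_time by node
| eval downtime=if(state="NodeDown", next_time - _time, null())
| eval month=strftime(_time, "%Y-%m")
| stats sum(downtime) as downtime_secs by node month
| eval downtime=tostring(downtime_secs, "duration")
```

Uptime per month can then be derived by subtracting downtime_secs from the number of seconds in that month.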
Hello community, I have a problem with a search that does not return a result. For the purposes of a dashboard, I need one of my searches to display 0 when it does not return a result. I have already managed this in some fairly complex searches, but for a fairly simple search I cannot do it. Note that when there is a result, it is displayed fine; the search runs correctly. I attempted to use the command | eval ACKED = if(isnull(ACKED) OR len(ACKED)==0, "0", ACKED), but the search does not seem to apply it. I found several topics on similar subjects (using fillnull, for example) but without result. I think it's not complicated, but I can't put my finger on the problem — do you have any idea? Best regards, Rajaion
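When a search returns zero rows there is nothing for a trailing eval or fillnull to operate on, which is why the if(isnull(...)) approach never fires. One common pattern is appendpipe, which appends a stand-in row only when the result set is empty. A sketch with placeholder search terms:

```
index=my_index my_filter
| stats count as ACKED by some_group
| appendpipe [ stats count as n | where n==0 | eval ACKED=0 | fields - n ]
```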
Hello, I have strange behavior with an eval command. If I do this, it works well:

| eval site=case(site=="0", "AA", site=="BR", "BB", site=="PER", "CC", 1==1, site)
| eval s=lower(s)
| search site="$site$"

But if I put | search site="$site$" directly after the eval, the search command is not recognized as a Splunk command!

| eval site=case(site=="0", "AA", site=="BR", "BB", site=="PER", "CC", 1==1, site)
| search site="$site$"

What is wrong, please?
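Hard to say without the exact error, but one thing worth checking when a token breaks a query only in certain positions is quoting: if the token's value contains quotes or other special characters, the expanded string can unbalance the query. Simple XML's |s filter quotes the value safely; a sketch:

```
| eval site=case(site=="0", "AA", site=="BR", "BB", site=="PER", "CC", 1==1, site)
| search site=$site|s$
```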
Hi Team, I have an NFR license and want to install ITSI. While trying to install the app, the process routed me to Splunkbase and my Splunk account's authorization was denied. What should I do? Can someone please help?
Hello, in a first dashboard I have a dropdown list:

<input type="dropdown" token="site" searchWhenChanged="true">
  <label>Espace</label>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <search>

So when I choose a site value, the dashboard is updated with the selected site. Now I want to drill down to another dashboard from the selected site, like this:

<link target="_blank">/app/spl_pu/test?form.$site$=$click.value$</link>

In the second dashboard I try to use the token like this, but it doesn't work:

| search site="$site$"

Could you help, please?
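A likely issue is the link's query string: form.<name> is the literal name of the token in the target dashboard, so the token name itself should not be a $...$ substitution. A sketch, assuming the second dashboard declares an input with token="site":

```
<link target="_blank">/app/spl_pu/test?form.site=$site$</link>
```

Use form.site=$click.value$ instead if the value should come from the clicked cell rather than the first dashboard's dropdown; either way, | search site="$site$" in the second dashboard then picks up the passed value.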
Hi, we have a requirement to install the Splunk Add-on for Microsoft SQL Server. We are using Splunk Cloud with the classic experience. Where all do we need to install this add-on? Is it sufficient to install it on the search head, or does it have to be installed on the heavy forwarder as well? Please clarify. The docs suggest installing on the search head only, per the table below:

Splunk instance type | Supported | Required | Comments
Search Heads | Yes | Yes | Install this add-on to all search heads where Microsoft SQL Server knowledge management is required.
Indexers | Yes | No | Not required, because this add-on does not include any index-time operations.
Heavy Forwarders | Yes | No | To collect dynamic management view data, trace logs, and audit logs, you must use Splunk DB Connect on a search head or heavy forwarder. The remaining data types support using a universal or light forwarder installed directly on the machines running MS SQL Server.
Universal Forwarders | Yes | No | To collect dynamic management view data, trace logs, and audit logs, you must use Splunk DB Connect on a search head or heavy forwarder. The remaining data types support file monitoring using a universal or light forwarder installed directly on the machines running MS SQL Server.
I came across a bug in this app: https://splunkbase.splunk.com/app/6553/ and thought I'd share. The log types logs and users work fine, but apps and groups are configured to fetch "enrichment data", and this fails if you need to use a proxy. After a bit of troubleshooting I found that on line 243 of okta_utils.py there is no proxy in the request call. I updated it to the following and it works.

Before:
r = requests.request("GET", url, headers=headers)

After:
r = requests.request("GET", url, headers=headers, proxies=proxies, timeout=reqTimeout)

I also had to add these lines to grab those settings; I added them just before the if statement:

# Get proxy settings
proxies = get_proxy_settings(self.session_key, self.logger)
# Set request timeout to 90 sec
reqTimeout = float(90)
Hello, I have data like the below. I need to frame a query that calculates the number of desyncs for each rate-parity-group. For example:

"rate-parity-group":{"CN":{"avail":11,"price":11}}}
"rate-parity-group":{"CK":{"avail":18,"price":0},"CL":{"avail":36,"price":0},"CM":{"avail":18,"price":0}}},
"rate-parity-group":{"CL":{"avail":18,"price":0},"CM":{"avail":36,"price":0}}}

Expected outcome:

rate-parity-group | total-desync
CL | 54 (36+18)
CM | 54
CK | 18

Since the rate-parity-group keys (CK, CM, CL, ...) are dynamic, I am facing a problem. Could someone help me get the desync count per rate-parity-group? Sample data attached in the screenshot. Thanks in advance.
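A sketch of one approach that side-steps the dynamic key names: extract every key/avail pair with a multivalue rex, zip and expand them, then sum. It assumes the group codes are two uppercase letters and that total-desync is the sum of the avail values, as in the expected outcome; index is a placeholder:

```
index=my_index "rate-parity-group"
| rex max_match=0 field=_raw "\"(?<grp>[A-Z]{2})\":\{\"avail\":(?<avail>\d+)"
| eval pair=mvzip(grp, avail, "=")
| mvexpand pair
| rex field=pair "^(?<grp>[A-Z]{2})=(?<avail>\d+)$"
| stats sum(avail) as total_desync by grp
```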
<input type="multiselect" token="product_token" searchWhenChanged="true"> <label>Product types</label> <choice value="*">All</choice> <default>*</default> <prefix>(</prefix> <suffix>)</suffix>... See more...
<input type="multiselect" token="product_token" searchWhenChanged="true"> <label>Product types</label> <choice value="*">All</choice> <default>*</default> <prefix>(</prefix> <suffix>)</suffix> <initialValue>*</initialValue> <valuePrefix>DB_Product="*</valuePrefix> <valueSuffix>*"</valueSuffix> <delimiter> OR </delimiter> <fieldForLabel>DB_Product</fieldForLabel> <fieldForValue>DB_Product</fieldForValue> <search base="base_search_Products"> <query>|dedup DB_Product | table DB_Product</query> </search> </input>   This is my input multi select , thorugh which user select product Types example - All /A,B,C,D etc I need to count, How many Product types are selcted by user . This info i need for further processing.