All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm seeing errors whereby PowerShell inputs just stop for some random reason. The only error I get is the following: failed with exception=The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Exception calling "Clear" with "0" argument(s): "Index was outside the bounds of the array." A simple restart of the SplunkForwardingService on the system makes it work again, so there's nothing wrong with the script. Anyone got any ideas why it's doing this? It's not just one script or one particular system; it's two scripts across two different systems, with the same error.
Hi Team! I was under the impression (mistakenly, most likely) that even if we did not own Splunk SOAR (which we don't), there would still be a limited amount of SOAR functionality available in Preview 2 (or will it be available with the final product?). When I go to Automation > Run Action, I don't seem to be able to run or enable any actions here. Thoughts?
Hello, we are trying to configure receiving AppFlow data from Citrix NetScaler, using the Splunk Add-on for Citrix Netscaler and Splunk Stream. Everything seems to work, except that not all fields are visible on the search head. We can see the "netflow_elements" fields, but they are apparently not recognized. How can we manage this? Any suggestions? Thanks!
I want to send mail alerts (stats count) that include time charts (timechart) showing the increase in the delta count over a period of time, but I'm not able to do it.
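For reference, a hedged sketch of one way to compute the change in event count over time; the index, sourcetype, and span are placeholders, not taken from the question:

```
index=my_index sourcetype=my_sourcetype
| timechart span=1h count AS event_count
| delta event_count AS delta_count
```

A timechart result like this can then typically be included in the alert email by enabling the inline results option of the email alert action.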
Hello, we have a Tiers application with 4 nodes corresponding to Tomcat applications. We restart these applications every evening at 22:00. For 2 of the 4 nodes, the JMX.ActiveSession metric is no longer collected after startup until around 10:00 the next day. It fixes itself without any action but happens again the following day. All 4 nodes are managed the same way with Chef deployment. Here are the versions of the components: Server Agent #22.5.0.33845 v22.5.0 GA compatible with 4.4.1.0 rd9770530415f19f4c5154a80772b833db8dd7cee release/22.5.0; AppDynamics Controller build 22.10.1-611. Here is a screenshot extracted from the Metrics Browser. Thanks for your suggestions and analysis of the potential root cause.
Hi, I use Splunk Enterprise Security with the Threat Intelligence framework. Splunk creates many 'Threat Activity Detected' notables, but I'd like to add/remove/edit the source types involved. Right now I only have events with the field orig_sourcetype="apache:access". For example, I tried to add events from firewalls and compare their source IPs with suspicious IPs. How can I configure the "orig_sourcetype" field in the Threat Intelligence data model?
Hello all! I am brand new to Splunk and have learned quite a bit so far from this forum, so thank you! That being said, I am currently trying to import event logs from another system to scan on my local instance of Splunk. I've tried moving the EVTX files into my winevt directory, but that didn't work. I'm getting very frustrated, and any help would be appreciated. -BabySplunk
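For what it's worth, exported .evt/.evtx files can generally be indexed by monitoring them from a Splunk instance running on Windows, rather than copying them into the live winevt directory. A hedged inputs.conf sketch; the path and index name are hypothetical:

```
[monitor://C:\exported_logs\*.evtx]
disabled = 0
index = imported_winevents
```

Note that this approach usually requires the monitoring instance itself to be on Windows, since that is where the event-log parsing happens.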
Hi, I'm trying to learn Splunk and I'm having trouble with timestamps. My XML record looks like this:

<LOG><DATUM>26112022</DATUM><Vrijeme>224516</Vrijeme><CC>6894542532143100</CC><Iznos>46144.46</Iznos></LOG>

I got the date (DATUM) and now I'm trying to get the time, but my problem is that I can't get past the closing tag to the next element. My props file looks like this:

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]*)<\w{2,}>
TIME_PREFIX = <DATUM>
TIME_FORMAT = %d%m%Y</DATUM>\n<Vrijeme>%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 100

Instead of "\n" I tried %n, [\r\n\s], and leaving it blank, but nothing works. Any tips?
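Since TIME_FORMAT strings can contain literal characters in addition to strptime directives, one hedged sketch is to include the intervening tags literally. This assumes each event is a single <LOG>...</LOG> line with no newline between the tags (the stanza name is hypothetical):

```
[xml_transactions]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<LOG>)
TIME_PREFIX = <DATUM>
TIME_FORMAT = %d%m%Y</DATUM><Vrijeme>%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 100
```

If there really is a newline between </DATUM> and <Vrijeme> in the raw data, this sketch would need adjusting, since TIME_FORMAT is a literal/strptime string rather than a regex.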
I would like to use AppDynamics for IVR application to know user experience, is this support for IVR apps? If yes please share me the reference documentation.
So I'm trying to turn a single-value number into a percentage, but the search still just returns a number. Here's my code:

index="keycloak" "MFA" mfa="MFA code issued" OR (mfa="MFA challenge issued")
| stats count AS Total count(eval(mfa="MFA code issued")) AS MFA_code_issued
| eval Percentage=round(((MFA_code_issued/Total)*100),2)
| table MFA_code_issued Percentage

Just to give some context: "MFA challenge issued" = the number of challenges issued to customers asking whether they want to receive an OTP code via SMS or email; "MFA code issued" = the number of OTP codes that have actually been issued to customers. My colleagues have asked for the percentage of MFA codes issued.
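If the goal is to display a "%" sign, one hedged option is to append it as a string; note this turns the field into text, so it will no longer sort numerically:

```
| eval Percentage=round((MFA_code_issued/Total)*100, 2)."%"
```

Alternatively, for a single-value visualization the unit can usually be set in the panel's number formatting options instead, keeping the underlying field numeric.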
Hi all, can you help with a Splunk search to get only the time from a datetime? For example, from 2022/11/28 17:00:00 I want to get only the time, 17:00.
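One hedged sketch, assuming the value is in a field called my_datetime in exactly that format (the field name is a placeholder): parse it to epoch time with strptime, then reformat with strftime.

```
| eval time_only=strftime(strptime(my_datetime, "%Y/%m/%d %H:%M:%S"), "%H:%M")
```

If the value is the event's own _time, the strptime step isn't needed: | eval time_only=strftime(_time, "%H:%M").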
I have the problem that my scheduled searches all have a lifetime of 10 days. This is the case for searches that run once every day, but also for searches that run every 4 hours. Changing the "Expires" value doesn't affect that 10-day lifetime. How can I change the default lifetime of my scheduled searches?
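For reference, a hedged sketch of controlling the artifact lifetime per search in savedsearches.conf; the stanza name is hypothetical:

```
[my_scheduled_search]
dispatch.ttl = 4h
```

Note that if the search triggers alert actions, the action's own TTL (configured in alert_actions.conf) may override dispatch.ttl, which could be one explanation for a fixed 10-day retention regardless of the "Expires" setting.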
We have a new Splunk Cloud environment. We are using the AWS TA add-on to ingest files from S3. The files have the extension ".csv", but they are not really CSV; they are TSV (tab-separated values). The add-on has built-in functionality to inspect CSV files and tries to convert them to JSON, and it therefore fails to ingest them correctly. Is there any way to turn off the CSV-parsing behavior of the S3 input? Source: https://docs.splunk.com/Documentation/AddOns/released/AWS/S3 — "The Generic S3 custom data types input processes .csv files according to their suffixes"
Hi Everyone, I have one requirement. Below is my search query to show the number of users logged in for every 1 hour:

index=ABC sourcetype=xyz "PROFILE_LOGIN"
| rex "PROFILE:(?<UserName>\w+)\-"
| bin _time span=1h
| stats dc(UserName) as No_Of_Users_Logged_In by _time

I am getting results like below:

_time               No_Of_Users_Logged_In
2022-11-28 10:00    1
2022-11-28 11:00    2

I want that when I click a row/timestamp/No_Of_Users_Logged_In value, it shows the raw events where the logged-in usernames are present for that particular time span (if the timestamp is 10:00, it should show raw events from 10:00 to 11:00). These events should open in a new search. Also, can you guide me on how to view these in a panel below the table using a drilldown? It should only show when we click on the values. (That is an additional request, to know whether it is possible.) Please guide and help me. XML code snippet:

<row>
  <panel>
    <title>Number of Users Logged In</title>
    <table>
      <search>
        <query>index=ABC sourcetype=xyz "PROFILE_LOGIN" |rex "PROFILE:(?<UserName>\w+)\-" |bin _time span=1h |stats dc(UserName) as No_Of_Users_Logged_In by _time</query>
        <earliest>$time_token.earliest$</earliest>
        <latest>$time_token.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">6</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
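A hedged sketch of the in-page variant: the table's drilldown sets tokens, and a second panel that depends on them shows the raw events. The token names sel_earliest/sel_latest are made up, the +3600 assumes the 1-hour span from the question, and this assumes $row._time$ resolves to epoch seconds (if it arrives as a formatted string, a strptime inside the eval would be needed). Inside the table, replace the drilldown option with:

```
<option name="drilldown">row</option>
<drilldown>
  <set token="sel_earliest">$row._time$</set>
  <eval token="sel_latest">$row._time$+3600</eval>
</drilldown>
```

Then add a panel that only appears once a row is clicked:

```
<row>
  <panel depends="$sel_earliest$">
    <title>Raw events for the selected hour</title>
    <event>
      <search>
        <query>index=ABC sourcetype=xyz "PROFILE_LOGIN"</query>
        <earliest>$sel_earliest$</earliest>
        <latest>$sel_latest$</latest>
      </search>
    </event>
  </panel>
</row>
```

For opening in a new search instead, a <link target="_blank"> element inside <drilldown> is the usual alternative.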
Hi All, good day. I need help with a search query for the scenario below. We have a few jobs, and we need to calculate an SLA breach if a job did not start at its Actual_start_time, and also a breach if a job did not end within some threshold of time, for example 30 seconds.
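A hedged sketch, assuming each event carries expected_start_time, start_time, and end_time as epoch seconds (all field names are placeholders, not from the question):

```
| eval start_breach=if(start_time > expected_start_time, "breach", "ok")
| eval runtime_breach=if((end_time - start_time) > 30, "breach", "ok")
| table job_name start_breach runtime_breach
```

If the times arrive as strings, they would first need converting with strptime before the comparisons.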
Hi Friends, I want to convert 2 specific columns to rows, and the remaining columns should be kept. This is my current SPL:

| inputlookup PG_WHSE_PrIME_TS
| search Server IN ("*")
| rename Site_Name as "Site Name" Name as NAME
| join type=left max=0 NAME
    [search index="pg_idx_whse_prod_database" source=Oracle sourcetype=PG_ST_WHSE_DPA_NEW host="*"
    | stats latest(PERCENTAGE) as PERCENTAGE by DB_ID PARAMETERNAME1 NAME]
| table "Site Name" NAME DB_ID PARAMETERNAME1 PERCENTAGE
| sort DB_ID

Site Name   NAME                            DB_ID  PARAMETERNAME1  PERCENTAGE
Crailsheim  CRL-PRIME-PROD.EU.PG.COM-PROD   24     AUDIT_TBS       81.38
Crailsheim  CRL-PRIME-PROD.EU.PG.COM-PROD   24     DLX_DATA_TS     38.24
Crailsheim  CRL-PRIME-PROD.EU.PG.COM-PROD   24     DLX_INDEX_TS    99.98
Crailsheim  CRL-PRIME-PROD.EU.PG.COM-PROD   24     DPA_TS          95
Bangkok     BKK-PRIME-PROD.AP.PG.COM-PROD   38     AUDIT_TBS       62.62
Bangkok     BKK-PRIME-PROD.AP.PG.COM-PROD   38     DLX_DATA_TS     75.21
Bangkok     BKK-PRIME-PROD.AP.PG.COM-PROD   38     DLX_INDEX_TS    96.24
Bangkok     BKK-PRIME-PROD.AP.PG.COM-PROD   38     DPA_TS          84
We found a Splunk app which allows us to take a file and write it to our Splunk server before sending it off to a data store. Due to issues with this app, I downloaded it, modified it, and re-uploaded it with some slight changes to the codebase. After this change, the newly uploaded app does not have permission to write to the filesystem, and we get the error "Unexpected error: [Errno 30] Read-only file system" when we try to use any of its alerts which write to a file. This did not happen in the original app, which confuses me, as the logic for the file upload and the file destination have not changed. These are the relevant bits of code which are currently throwing the error. Anyone know why this might not be working?

if not os.path.exists("out"):
    os.makedirs("out")
filename = "out/" + sid + ".csv"

For context, the changes really were slight, and we are running our instance on Splunk Cloud, so we do not have direct access to the filesystem to be able to debug why this issue is getting thrown.
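One possible factor is where the relative "out" directory lands: the working directory of an alert action on Splunk Cloud may sit inside a read-only area, and a re-packaged app can also lose whatever install-time vetting the original had. A hedged sketch of writing under a location that is typically writable instead; the subdirectory name my_alert_app is a made-up placeholder, and the temp-dir fallback exists only so the function runs outside Splunk:

```python
import os
import tempfile


def resolve_output_path(sid):
    """Return a writable CSV path for a given search id (sid).

    On Splunk Cloud the app's own directory can be read-only, so a
    relative path like "out/<sid>.csv" may fail with EROFS. Writing
    under $SPLUNK_HOME/var is one commonly writable location; the
    directory name "my_alert_app" is a hypothetical placeholder.
    """
    base = os.environ.get("SPLUNK_HOME", tempfile.gettempdir())
    out_dir = os.path.join(base, "var", "run", "my_alert_app")
    # exist_ok makes repeated calls safe, unlike a bare os.makedirs()
    os.makedirs(out_dir, exist_ok=True)
    return os.path.join(out_dir, sid + ".csv")
```

With this sketch, the alert code would call resolve_output_path(sid) instead of building "out/"+sid+".csv" by hand.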
Hello, our company is using Splunk Enterprise 8.2.9 with application configuration/retention in GitLab. We have a few concerns about add-on and app configurations that store credentials (usernames/passwords) in their files (inputs.conf or passwords.conf). As of now we use the Web UI to locally configure the add-ons, which stores the configured credentials in the add-on's local directory in a passwords.conf file, showing ******* for the credentials. The problem with this scenario is that when it comes to an add-on upgrade, the credential file is not read and we have to configure the add-on again from scratch. To avoid that wasted time, we'd like to store the add-on configuration in a dedicated app, or to have the needed credentials stored in apps. The main concern is that all our Splunk configuration is stored in GitLab, so to avoid a data breach we have to encrypt those credentials. Is it possible to store encrypted credential values in GitLab that Splunk will be able to read easily, for example by using pass4SymmKey or a salt? Or any other encryption method?
Hi All, I have a dashboard displaying asset counts per group for various business units, and someone recently requested that some set of IP ranges be excluded. But the problem is that if I use e.g. NOT (ip="10.0.0.0/8") in my base search, this affects the asset count for every other group/BU, because the same subnet range overlaps across them. How can I write the search so the exclusion applies only to a specific group/BU instead of all of them? My current search looks something like this:

index=something sourcetype=anything (ip="10.0.0.0/8" OR ip="192.168.0.0/16" OR ip="172.16.0.0/12")
| eval bu=case(network="network_name1", "bu1", network="network_name2", "bu2", network="network_name3", "bu3", network="network_name4", "bu4")
| stats dc(ip) by bu

Thanks!
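A hedged sketch of excluding a range for one BU only, done after the bu field is assigned; the excluded range 10.1.0.0/16 and the BU it applies to are made-up examples:

```
index=something sourcetype=anything (ip="10.0.0.0/8" OR ip="192.168.0.0/16" OR ip="172.16.0.0/12")
| eval bu=case(network="network_name1", "bu1", network="network_name2", "bu2", network="network_name3", "bu3", network="network_name4", "bu4")
| where NOT (bu="bu2" AND cidrmatch("10.1.0.0/16", ip))
| stats dc(ip) by bu
```

Because the exclusion is applied after bu is computed, the same subnet still counts toward the other business units.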
Hi Splunkers, I currently have one Splunk machine that holds two roles at once (a search head and an indexer), and I want to separate each role onto its own machine. Is there a way to do this? If so, what are the steps? Thanks.
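As one hedged data point, the search-head side of such a split usually comes down to adding the indexer as a search peer. A distsearch.conf sketch for the new search head; the hostname is hypothetical:

```
[distributedSearch]
servers = https://indexer01.example.com:8089
```

Migrating the existing indexed data and pointing forwarders at the dedicated indexer are separate steps, so this is only part of the picture.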