All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


So I'm trying to turn a single-value number into a percentage, but the search still just returns a plain number. Here's my search:

index="keycloak" "MFA" mfa="MFA code issued" OR (mfa="MFA challenge issued")
| stats count AS Total count(eval(mfa="MFA code issued")) AS MFA_code_issued
| eval Percentage=round(((MFA_code_issued/Total)*100),2)
| table MFA_code_issued Percentage

Just to give some context:
MFA challenge issued = the number of challenges issued to customers asking whether they want to receive an OTP code via SMS or email
MFA code issued = the number of OTP codes that have actually been issued to customers

My colleagues have asked for the percentage of MFA codes issued.
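One possible tweak (a sketch, not tested against this data): round() returns a number, so if the goal is a value that reads as a percentage, append the percent sign with eval's string concatenation, keeping in mind the field then becomes a string rather than a number:

index="keycloak" "MFA" mfa="MFA code issued" OR mfa="MFA challenge issued"
| stats count AS Total count(eval(mfa="MFA code issued")) AS MFA_code_issued
| eval Percentage=round((MFA_code_issued/Total)*100, 2)."%"
| table MFA_code_issued Percentage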
Hi all, can you help with a Splunk search to get only the time from a date-time value? For example, from 2022/11/28 17:00:00 I want to get only the time, 17:00.
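A minimal sketch, assuming the value sits in a field named datetime (a hypothetical name) in the format shown:

| eval time_only=strftime(strptime(datetime, "%Y/%m/%d %H:%M:%S"), "%H:%M")

If the value in question is the event timestamp itself, strftime(_time, "%H:%M") alone may be enough.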
I have the problem that my scheduled searches all have a lifetime of 10 days. This is the case for searches that run once a day as well as for searches that run every 4 hours. Changing the "Expires" value doesn't affect that 10-day lifetime. How can I change the default lifetime of my scheduled searches?
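One setting worth looking at (a sketch; the stanza name is illustrative): dispatch.ttl in savedsearches.conf controls how long a scheduled search's artifacts are kept, either in seconds or as a multiple of the schedule period:

# savedsearches.conf
[my_scheduled_search]
# keep artifacts for 24 hours; a value like 2p would mean two scheduled periods
dispatch.ttl = 86400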
We have a new Splunk Cloud environment. We are using the AWS TA add-on to ingest files from S3. The files have an extension of ".csv" but they are not really CSV; they are TSV (tab-separated values). The add-on has built-in functionality to inspect CSV files and tries to convert them to JSON, so it fails to ingest them correctly. Is there any way to turn off the CSV parsing of the S3 input?

Source: https://docs.splunk.com/Documentation/AddOns/released/AWS/S3
"The Generic S3 custom data types input processes .csv files according to their suffixes"
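I'm not aware of a documented switch in the add-on to disable the suffix-based CSV handling, so the following is only a sketch of a workaround direction: if the objects can be renamed away from the .csv suffix upstream so the raw tab-separated bytes reach normal parsing, a props.conf stanza (the sourcetype name here is hypothetical) could describe the TSV structure:

# props.conf
[my_tsv_sourcetype]
INDEXED_EXTRACTIONS = tsv
HEADER_FIELD_LINE_NUMBER = 1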
Hi everyone, I have a requirement. Below is my search query to show the number of users logged in for every 1 hour:

index=ABC sourcetype=xyz "PROFILE_LOGIN"
| rex "PROFILE:(?<UserName>\w+)\-"
| bin _time span=1h
| stats dc(UserName) as No_Of_Users_Logged_In by _time

I am getting results like this:

_time                No_Of_Users_Logged_In
2022-11-28 10:00     1
2022-11-28 11:00     2

When I click a row's timestamp / No_Of_Users_Logged_In value, it should show the raw events containing the logged-in usernames for that particular hour (if the timestamp is 10:00, it should show raw events from 10:00 to 11:00). These events should open in a new search. Also, can you guide me on how to view them in a panel below the table using a drilldown? That panel should only show when a value is clicked (this is an additional request, to know whether it is possible). Please guide and help me.

XML code snippet:

<row>
  <panel>
    <title>Number of Users Logged In</title>
    <table>
      <search>
        <query>index=ABC sourcetype=xyz "PROFILE_LOGIN" |rex "PROFILE:(?<UserName>\w+)\-" |bin _time span=1h |stats dc(UserName) as No_Of_Users_Logged_In by _time</query>
        <earliest>$time_token.earliest$</earliest>
        <latest>$time_token.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">6</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
    </table>
  </panel>
</row>
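One common Simple XML pattern for both requests (a sketch, untested; the token names drill_earliest/drill_latest are made up): change the table's drilldown option from none to row, set tokens on click, and add a hidden events panel that depends on those tokens. If the clicked _time arrives as a formatted string rather than epoch seconds, it may need converting with strptime first; to open the raw events in a new search instead, a <link target="_blank"> element inside <drilldown> pointing at the search page with the same earliest/latest values does the job.

<table>
  ...
  <option name="drilldown">row</option>
  <drilldown>
    <set token="drill_earliest">$row._time$</set>
    <eval token="drill_latest">$row._time$ + 3600</eval>
  </drilldown>
</table>
...
<row>
  <panel depends="$drill_earliest$">
    <title>Raw events for the selected hour</title>
    <event>
      <search>
        <query>index=ABC sourcetype=xyz "PROFILE_LOGIN"</query>
        <earliest>$drill_earliest$</earliest>
        <latest>$drill_latest$</latest>
      </search>
    </event>
  </panel>
</row>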
Hi all, good day. I need help with a search query for the following scenario: we have a few jobs, and from the data we need to flag an SLA breach if a job did not start at its Actual_start_time, and also a breach if the job did not end within some threshold of time, for example 30 seconds.
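Without a sample event it's hard to be specific, but a sketch under assumptions (the field names scheduled_start, actual_start, end_time and job_name are hypothetical, and the values are assumed to already be epoch seconds) might look like:

| eval start_delay = actual_start - scheduled_start
| eval run_time = end_time - actual_start
| eval sla_status = case(start_delay > 0, "breach: late start", run_time > 30, "breach: ran longer than 30s", true(), "ok")
| table job_name start_delay run_time sla_status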
Hi friends, I want to convert 2 specific columns to rows while keeping the remaining columns. This is my current SPL:

| inputlookup PG_WHSE_PrIME_TS
| search Server IN ("*")
| rename Site_Name as "Site Name" Name as NAME
| join type=left max=0 NAME
    [search index="pg_idx_whse_prod_database" source=Oracle sourcetype=PG_ST_WHSE_DPA_NEW host="*"
    | stats latest(PERCENTAGE) as PERCENTAGE by DB_ID PARAMETERNAME1 NAME ]
| table "Site Name" NAME DB_ID PARAMETERNAME1 PERCENTAGE
| sort DB_ID

Current output:

Site Name    Name                            DB_ID   PARAMETERNAME1   PERCENTAGE
Crailsheim   CRL-PRIME-PROD.EU.PG.COM-PROD   24      AUDIT_TBS        81.38
Crailsheim   CRL-PRIME-PROD.EU.PG.COM-PROD   24      DLX_DATA_TS      38.24
Crailsheim   CRL-PRIME-PROD.EU.PG.COM-PROD   24      DLX_INDEX_TS     99.98
Crailsheim   CRL-PRIME-PROD.EU.PG.COM-PROD   24      DPA_TS           95
Bangkok      BKK-PRIME-PROD.AP.PG.COM-PROD   38      AUDIT_TBS        62.62
Bangkok      BKK-PRIME-PROD.AP.PG.COM-PROD   38      DLX_DATA_TS      75.21
Bangkok      BKK-PRIME-PROD.AP.PG.COM-PROD   38      DLX_INDEX_TS     96.24
Bangkok      BKK-PRIME-PROD.AP.PG.COM-PROD   38      DPA_TS           84
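One reading of "convert 2 specific columns to rows" is to pivot each PARAMETERNAME1/PERCENTAGE pair into its own column per server while keeping Site Name, NAME and DB_ID; a sketch of that (untested) appended to the search above:

| eval {PARAMETERNAME1} = PERCENTAGE
| fields - PARAMETERNAME1 PERCENTAGE
| stats values(*) as * by "Site Name" NAME DB_ID

If the intent is the other direction, untable and xyseries are the usual pair of commands to look at.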
We found a Splunk app which allows us to take a file and write it to our Splunk server before sending it off to a data store. Due to issues with this app, I downloaded it, modified it, and re-uploaded it with some slight changes to the codebase. After this change, the newly uploaded app does not have permission to write to the filesystem, and we get the error "Unexpected error: [Errno 30] Read-only file system" when we try to use any of its alerts that write to a file. This did not happen with the original app, which confuses me, as the logic for the file upload and the file destination have not changed. These are the relevant bits of code that are currently throwing the error. Does anyone know why this might not be working?

if not os.path.exists("out"):
    os.makedirs("out")
filename = "out/" + sid + ".csv"

For context, the only changes were that final… and we are running our instance on Splunk Cloud, so we do not have direct access to the filesystem to debug why this issue is being thrown.
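A sketch of one possible cause and workaround: "out" is a relative path, so it resolves against the process's working directory, which can be read-only on Splunk Cloud. Writing under the search job's dispatch directory instead (derived from SPLUNK_HOME and the sid variable already present in the original snippet) is one option:

import os

# Sketch: write into the job's dispatch directory rather than a relative "out" folder.
# Assumes SPLUNK_HOME is set in the environment and sid holds the search id, as in the original code.
dispatch_dir = os.path.join(os.environ["SPLUNK_HOME"], "var", "run", "splunk", "dispatch", sid)
out_dir = os.path.join(dispatch_dir, "out")
os.makedirs(out_dir, exist_ok=True)
filename = os.path.join(out_dir, sid + ".csv")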
Hello, our company is using Splunk Enterprise 8.2.9 with application configuration/retention in GitLab. We have a few concerns when it comes to add-on and app configurations that store credentials (username/password) in their files (inputs.conf or passwords.conf).

As of now we use the Web UI to configure the add-ons locally, which stores the configured credentials in the add-on's local directory in a passwords.conf file, showing ******* for the credentials. The problem with this scenario is that, when it comes to an add-on upgrade, the credential file is not read and we have to configure the add-on again from scratch. To avoid wasting that time, we'd like to store the add-on's configuration in a dedicated app, or have the needed credentials stored in apps.

Our main concern is that all our Splunk configuration is stored in GitLab. To avoid a data breach we have to encrypt those credentials. Is it possible to store encrypted credential values in GitLab that Splunk will be able to read easily, for example by using pass4SymmKey or a salt, or any other encryption method?
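One pattern that is sometimes used (a sketch; whether it satisfies your security policy is a separate question): credentials saved through the UI land in passwords.conf encrypted with the instance's splunk.secret, and such a stanza can be versioned and redeployed as-is, provided every instance that must decrypt it shares the same splunk.secret:

# passwords.conf (realm, username and the $7$ blob below are placeholders)
[credential:my_realm:my_api_user:]
password = $7$...encrypted-with-splunk.secret...

Note that pass4SymmKey is meant for authenticating cluster members and deployment clients to each other, not as a general mechanism for encrypting arbitrary credentials at rest.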
Hi all, I have a dashboard displaying asset counts per group for various business units, and recently someone requested that a set of IP ranges be excluded. The problem is that if I use e.g. NOT (IP="10.0.0.0/8") in my base search, it affects the asset count of every other group/BU, because the same subnet range overlaps. How can I write the search so the exclusion applies only to a specific group/BU instead of all of them?

My current search looks something like this:

index=something sourcetype=anything (ip="10.0.0.0/8" OR ip="192.168.0.0/16" OR ip="172.16.0.0/12")
| eval bu=case(network="network_name1", "bu1", network="network_name2", "bu2", network="network_name3", "bu3", network="network_name4", "bu4")
| stats dc(ip) by bu

Thanks!
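A sketch of a per-BU exclusion (assuming the exclusion should only apply to bu1 and that ip holds a single address cidrmatch can test; the excluded range 10.0.64.0/18 is a placeholder):

index=something sourcetype=anything (ip="10.0.0.0/8" OR ip="192.168.0.0/16" OR ip="172.16.0.0/12")
| eval bu=case(network="network_name1", "bu1", network="network_name2", "bu2", network="network_name3", "bu3", network="network_name4", "bu4")
| where NOT (bu="bu1" AND cidrmatch("10.0.64.0/18", ip))
| stats dc(ip) by bu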
Hi Splunkers, I currently have one Splunk machine that holds two roles at once (search head and indexer), and I want to separate the roles so that each runs on its own machine. Is there a way to do this, and if so, what are the steps? Thanks.
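At a high level the usual direction is to stand up a new instance for one of the roles, point the search tier at the indexing tier, and redirect forwarder traffic. A sketch of one of those steps, adding the indexer as a distributed-search peer from the new search head's CLI (hostnames and credentials are placeholders):

/opt/splunk/bin/splunk add search-server https://indexer.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme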
I am trying to expand a couple of fields (locationId, matchRank) using mvexpand, but it only works for shorter durations, up to 7 days; I want to do this for up to 30 days. My matched_locations JSON array can have at most 10 items in it, each with a locationId and an associated matchRank. The number of items in matched_locations is given by the msg.logMessage.numReturnedMatches field. I have thousands of such events.

Below is the query, which works for shorter time ranges but times out for longer ones:

index=app_pcf AND cf_app_name="credit-analytics-api" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| search "msg.logMessage.numReturnedMatches">0
| eval all_fields = mvzip('msg.logMessage.matched_locations{}.locationId','msg.logMessage.matched_locations{}.matchRank')
| mvexpand all_fields
| makemv delim="," all_fields
| eval LocationId=mvindex(all_fields, 0)
| eval rank=mvindex(all_fields,1)
| fields LocationId, rank
| table LocationId, rank

Below is sample Splunk data for one such event:

cf_app_name: myApp
cf_org_name: myOrg
cf_space_name: mySpace
job: diego_cell
message_type: OUT
msg: {
  application: myApp
  correlationid: 0.af277368.1669261134.5eb2322
  httpmethod: GET
  level: INFO
  logMessage: {
    apiName: Matches
    apiStatus: Success
    clientId: oh_HSuoA6jKe0b75gjOIL32gtt1NsygFiutBdALv5b45fe4b
    error: NA
    matched_locations: [
      {
        city: PHOENIX
        countryCode: USA
        locationId: bef26c03-dc5d-4f16-a3ff-957beea80482
        matchRank: 1
        merchantName: BIG D FLOORCOVERING SUPPLIES
        postalCode: 85009-1716
        state: AZ
        streetAddress: 2802 W VIRGINIA AVE
      }
      {
        city: PHOENIX
        countryCode: USA
        locationId: ec9b385d-6283-46f4-8c9e-dbbe41e48fcc
        matchRank: 2
        merchantName: BIG D FLOOR COVERING 4
        postalCode: 85009
        state: AZ
        streetAddress: 4110 W WASHINGTON ST STE 100
      }
      { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] }
    ]
    numReturnedMatches: 10
  }
  logger: c.m.c.d.MatchesApiDelegateImpl
}
origin: rep
source_instance: 1
source_type: APP/PROC/WEB
timestamp: 1669261139716063000
}
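A sketch of one way to reduce the memory pressure that typically makes mvexpand drop results or time out over long ranges (not verified against this data): strip the events down to the two multivalue fields before zipping, and carry only the zipped field into the expand:

index=app_pcf cf_app_name="credit-analytics-api" message_type=OUT msg.logger=c.m.c.d.MatchesApiDelegateImpl "msg.logMessage.numReturnedMatches">0
| fields msg.logMessage.matched_locations{}.locationId msg.logMessage.matched_locations{}.matchRank
| eval all_fields = mvzip('msg.logMessage.matched_locations{}.locationId', 'msg.logMessage.matched_locations{}.matchRank')
| fields all_fields
| mvexpand all_fields
| rex field=all_fields "^(?<LocationId>[^,]+),(?<rank>.+)$"
| table LocationId rank

If it still falls over, max_mem_usage_mb in limits.conf is the other lever commonly mentioned for mvexpand.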
For example, a real-time search is running with a window of the past 10 minutes. At this point, data with a timestamp of 15 minutes ago is ingested. When searching by _time there are no hits, because the event does not fall within the past 10 minutes, but when searching by index time there are hits. I would like to know which of the two the window of a real-time search is based on.
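One way to see how far event time and index time drift apart in this data (a sketch over an ordinary, non-real-time search; index=my_index is a placeholder):

index=my_index earliest=-15m
| eval index_lag = _indextime - _time
| table _time _indextime index_lag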
Got these errors while installing acs-cli in WSL/Ubuntu:

==> Installing acs from splunk/tap
==> Installing dependencies for splunk/tap/acs: linux-headers@5.15, glibc, gmp, isl, mpfr, libmpc, lz4, xz, zlib, zstd, binutils and gcc
==> Installing splunk/tap/acs dependency: linux-headers@5.15
==> Pouring linux-headers@5.15--5.15.57.x86_64_linux.bottle.1.tar.gz
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_connmark.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_dscp.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_mark.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_rateest.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_tcpmss.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter_ipv4/ipt_ecn.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter_ipv4/ipt_ttl.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter_ipv6/ip6t_hl.h': File exists
Error: Failure while executing; `cp -pR /tmp/d20221130-6079-1ef476r/linux-headers@5.15/. /home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15` exited with 1. Here's the output:
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_connmark.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_dscp.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_mark.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_rateest.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter/xt_tcpmss.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter_ipv4/ipt_ecn.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter_ipv4/ipt_ttl.h': File exists
cp: cannot create regular file '/home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15/5.15.57/include/linux/netfilter_ipv6/ip6t_hl.h': File exists

This happened after the "brew install acs" command had already gone through 13 downloads of libraries etc. Any help would be much appreciated.
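A sketch of one cleanup that sometimes gets past a half-poured keg (the paths come from the error output above; this removes the partially installed linux-headers keg and retries the formula):

brew uninstall --ignore-dependencies linux-headers@5.15 2>/dev/null
rm -rf /home/linuxbrew/.linuxbrew/Cellar/linux-headers@5.15
brew install splunk/tap/acs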
Hi all, we upgraded our Splunk Enterprise to the latest 9.0.2 and noticed something odd on the About page. Why does the product say "Retention"? What does it mean? I have never come across this before.
We upgraded the Splunk Universal Forwarders on our web servers from 8.0.5 to 9.0.1 back in late October, and since then we've seen a dramatic increase in CPU utilization by the splunkd.exe process on each server. Each instance is tracking a fairly large number of files, typically 3k or so per day, in a folder that can contain up to 10k files. I've found that reducing the number of 'old' files in the folder helps, but the CPU load is still dramatically above what it was with version 8.0.5.
Hi all, I would like to use the bin command to split the demo data set below into 10 bins according to Exe_time, and list Substage_time along with it. Does anyone have ideas about how to use the bin command correctly? I used these commands, but the output isn't what I expected:

| bin Exe_time as time_bin bins=10
| stats values(Substage_time) by time_bin

The demo data set is listed below:

Exe_time   Substage_time   Count
10         8 11            2
21         9 12            2
32         8               1
43         9 19            4
54         9 12            3
65         8 11            6
66         9 19            7
67         8 11            6
70         9 12            6
71         8 11            5
80         7               4
81         9 12            11
95         7               8
108        11              3
220        8 11            5

Thank you.
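One thing that may explain the surprise (a sketch, not verified against this exact data): bins=10 only caps the number of buckets, and bin still snaps to "nice" boundaries, so the result is not necessarily 10 equal buckets. Forcing an explicit span gives predictable edges; with Exe_time running up to 220, a span of 22 yields roughly 10 buckets:

| bin Exe_time as time_bin span=22
| stats values(Substage_time) as Substage_time by time_bin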
I am currently attempting to create a table that displays the count of one event from the previous month in comparison to the current month. I'm not quite sure of the best way to do this, but I've created a search with an appended search and I'm attempting to display the two results, last month versus this month, in a table. Essentially what I'm trying to achieve is the following:

Event     Last Month (count)   Current Month (count)
Event 1   4323                 435
Event 2   564                  23

Here is my base search so far:

<search 1> earliest=-1mon@mon latest=@mon
| multikv
| stats count by event
| eval input_type="Last_Month"
| append [<search 2> earliest=@mon latest=now
    | multikv
    | stats count by event
    | eval input_type="Current_Month"]

Thank you!
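A sketch of one way to pivot the two months into the desired shape without append (assuming both months come from the same base search and the events carry _time):

<base search> earliest=-1mon@mon latest=now
| multikv
| eval input_type=if(_time < relative_time(now(), "@mon"), "Last Month (count)", "Current Month (count)")
| chart count over event by input_type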
I need to configure the Splunk Add-on for Linux on both the forwarder and the Splunk server machines. I have Splunk Enterprise running on Windows, collecting logs from Linux using a Splunk forwarder. To get the dashboards from the Splunk add-on for Linux, do I need to install the add-on on both machines, and if so, what steps do I need to follow on the Linux host machine? Many thanks for considering my request.
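A sketch of the usual forwarder-side step, assuming a universal forwarder installed under /opt/splunkforwarder and that the downloaded package is the add-on tarball (the filename pattern is a placeholder):

tar -xzf splunk-add-on-for-unix-and-linux_*.tgz -C /opt/splunkforwarder/etc/apps/
/opt/splunkforwarder/bin/splunk restart

The add-on's scripted inputs generally still need enabling in a local/inputs.conf before data flows, and the dashboards themselves have historically shipped with the companion Splunk App for Unix and Linux on the search head rather than with the add-on itself.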
Hello, I have a use case to find the delta between two sets of events. We get events once a day; our objective is to find the delta between the current events (received today) and the events we received yesterday, and create a report based on that delta. Any recommendation would be highly appreciated. Thank you so much.
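A sketch of one common approach (index=my_index and record_id are hypothetical stand-ins for the daily feed's index and for whatever uniquely identifies a record in it):

index=my_index earliest=-1d@d latest=now
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) as days by record_id
| eval delta=case(mvcount(days)=2, "in both", days="today", "new today", days="yesterday", "missing today")
| where delta!="in both"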