All Topics

Hi, please help. I want to show the x-axis values in a bar chart. In the attached image, I have a query without a transpose command, and the x-axis values appear; but when I added a transpose command to change the color of each bar, the x-axis values suddenly stopped appearing. The query displays the top 10 highest values. (Screenshot attached: untitled.png, 72.4 kB)
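One possible way to keep the category labels after transpose (a sketch, not from the original post; the index and field names are placeholders) is to make the category field the header_field, so each row becomes a named series instead of "row 1", "row 2", and so on:

```
index=my_index
| top limit=10 category showperc=false
| transpose 10 header_field=category column_name=metric
```

With header_field set, the transposed columns carry the original category values, which the bar chart can then render as x-axis labels.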
I want to convert a large epoch duration (in seconds) to hours, minutes, and seconds. Example: epoch duration 9386717.000000 should become 2607:25:17 (hours:minutes:seconds). How can I get this result? Thanks in advance.
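One way to do this in SPL (a sketch; makeresults is only there for illustration) is to compute the hour, minute, and second components with eval and format them with printf:

```
| makeresults
| eval secs=9386717
| eval hms=printf("%d:%02d:%02d", floor(secs/3600), floor((secs%3600)/60), secs%60)
```

For 9386717 this produces 2607:25:17. Note that tostring(secs, "duration") would instead render the value with a days component (D+HH:MM:SS), which is not the requested format.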
Hi, I have Splunk licenses that expire on a given date. I want to renew them, but only 80% of them with a renewal date one year ahead, and 20% with an expiration date only a few months ahead, i.e., extend that portion for only a few months. Is that possible? Example: currently I have an 80GB license expiring May 31 2020 and a 20GB license expiring May 31 2020. I want to renew as follows: the 80GB license expiring May 30 2021, and the 20GB license expiring Jul 31 2020; after that date, I will only have 80GB remaining. Is that possible?
Hello everyone. Current scenario: reports run every 15 minutes, and their output is charts. We take screenshots of those reports and manually send an email every hour with the screenshots from the past hour. For instance, at 10am we send an email with four screenshots from 9:00, 9:15, 9:30, and 9:45. My manager wants me to automate this: the reports from the past hour should be saved and sent in a single email every hour. Please let me know if and how this can be achieved. Thanks in advance.
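If the reports can be combined into one scheduled search, the built-in email alert action could replace the manual screenshots. A hedged savedsearches.conf sketch (the stanza name, recipient, and search string are placeholders, not from the original post):

```
[hourly_status_report]
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
action.email = 1
action.email.to = team@example.com
action.email.sendpdf = 1
search = index=my_index | timechart span=15m count
```

This runs at the top of every hour over the previous whole hour and attaches the rendered result as a PDF, which approximates "one email per hour covering the past hour".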
Using Splunk Cloud. After adding my first HTTP Event Collector, the status shows disabled; the Actions column does not show an enable link, only a greyed-out disabled link, and monitoring shows no connection attempts. I tried a manual test, but I think I have the endpoint hostname incorrect:
PS C:\Users\eagle> Invoke-WebRequest -Uri "https://http-inputs-xxxx.splunkcloud.com:443/services/collector/event" -Headers @{'Authorization' = 'Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx'}
Invoke-WebRequest : The remote name could not be resolved: 'http-inputs-xxxx.splunkcloud.com'
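For reference, the DNS error suggests the stack prefix in the hostname is wrong rather than HEC itself. Once the hostname resolves, a minimal manual test might look like the sketch below (the stack name and token are placeholders; HEC expects a POST with a JSON body, which the original command omitted):

```powershell
Invoke-WebRequest -Method Post `
  -Uri "https://http-inputs-mystack.splunkcloud.com:443/services/collector/event" `
  -Headers @{ Authorization = 'Splunk 00000000-0000-0000-0000-000000000000' } `
  -Body '{"event": "hello world", "sourcetype": "manual_test"}'
```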
Hello Splunkers, first of all, thank you all for such a great community. I have a question. I am running a query in which I use appendcols to append the results of a subsearch to my initial search. I do this because I am managing large datasets and want to avoid the join command. My query is the following:
index=active_directory (source="ACTIVE_DIRECTORY")
| dedup NUUMA
| eval NUUMA=tostring(upper(NUUMA))
| table NUUMA DISPLAYNAME UserAcControl
| appendcols [search index=active_directory source="APP1" | dedup USERNAME | fields USERNAME UserAcControl | eval NUUMA=tostring(upper(USERNAME)) | fillnull value=NULL UserAcControl]
| stats values(UserAcControl) count by NUUMA
I am getting the results that I need, but after the stats command I need to select the UserAcControl attribute with NULL values. I have tried something like this, but it is not working:
…| stats values(UserAcControl) count by NUUMA | where isnull(UserAcControl)
I am attaching a screenshot showing the values that I want to capture. Any thoughts? Thank you!
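A possible explanation (sketch, based only on the query above): the subsearch fills missing values with the literal string "NULL" (fillnull value=NULL), and the stats output field is actually named values(UserAcControl), not UserAcControl. Renaming the aggregate and comparing against the string, with isnull as a fallback for rows that got no value at all, might look like this:

```
... | stats values(UserAcControl) as UserAcControl count by NUUMA
| where UserAcControl="NULL" OR isnull(UserAcControl)
```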
I created a Splunk Enterprise Security Online Sandbox, but the pre-populated data is not visible on the instance. Even the support page is broken: https://www.splunk.com/en_us/talk-to-sales.html?utm_medium=email&utm_source=nurture&utm_campaign=GLOBAL_CloudTrial_Nurture_Email2_Feb20&utm_content=_Contact_Us_&elqTrackId=FA80755A6B4D94BDB2205E23A582F01F&elq=7a49bd46a06243b39d899dc0a039df7b&elqaid=26514&elqat=1&elqCampaignId=
Hi all, I am stuck in a very unfortunate situation. I am developing a modular input (using Splunk Add-on Builder) that takes two input parameters, a base URL and an API key. I want to validate both in my validate_input method by sending a request to our server and, if the response is good, let the user save the input parameters. The problem is that my API key parameter is of type password, and in my validate_input method I can only get its masked value in the form of ***********. How can I get its plain value so that I can use it to send a request to the server? Here's my sample code for the validate_input method:
def validate_input(helper, definition):
    """Implement your own validation logic to validate the input stanza configurations"""
    # This example accesses the modular input variable
    snx_base_url = definition.parameters.get('snx_base_url', None)
    snx_api_key = definition.parameters.get('snx_api_key', None)
Please point me in a direction that can help me achieve the above. I shall be highly thankful to you! Regards, Umair
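A hedged workaround sketch (not confirmed Add-on Builder behavior): if the masked value always arrives as a string of asterisks, the remote check can be skipped in that case and deferred to collection time, where the helper has access to the clear-text credential. The is_masked helper below is illustrative:

```python
def is_masked(value):
    """Return True if the value looks like a masked password (all asterisks)."""
    return bool(value) and set(value) == {"*"}

def validate_input(helper, definition):
    """Validate the input stanza configurations (sketch)."""
    snx_base_url = definition.parameters.get("snx_base_url", None)
    snx_api_key = definition.parameters.get("snx_api_key", None)
    if is_masked(snx_api_key):
        # The clear-text key is not available here; skip the remote check
        # and validate the key when the input actually runs.
        return
    # ...otherwise send a test request to snx_base_url using snx_api_key...
```

This does not recover the plain value inside validate_input; it only avoids sending the masked placeholder to the server.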
I am trying to configure SecKit with ES 6.1.1, but I am running into an issue with the configuration. I am hoping someone has completed this and can shed some light. The configuration steps are:
1. As an es_admin, navigate to Splunk Enterprise Security.
2. From the Configure menu, select General.
3. From the General menu, select App Imports Update.
4. Click on "update_es".
5. Append |(SecKit_[ST]A_.*) to the Application Regular Expression.
6. Click Save.
When I go to the General menu, I do not see the option for App Imports; I have looked around and have not seen it at all. If I skip this step, I can run the first search, | inputlookup seckit_idm_network_masks_lookup, to validate that results are there. But when I run the next saved searches, I get errors.
Running | from savedsearch: "seckit_idm_common_assets_networks_lookup_gen" works fine with no issues.
Running | from savedsearch: "Identity - Asset String Matches - Lookup Gen" gives: Error in 'savedsearch' command: Unable to find saved search named 'Identity - Asset CIDR Matches - Lookup Gen'.
Running | from savedsearch: "Identity - Asset CIDR Matches - Lookup Gen" gives the same error: Error in 'savedsearch' command: Unable to find saved search named 'Identity - Asset CIDR Matches - Lookup Gen'.
When I look for these searches, I cannot find them. I have used SecKit in the past and it was awesome; I was hoping to get it up and running on Splunk 8 and ES 6.1.1. I have SecKit_SA_idm_common 3.0.8Rbaf6f27, SecKit_SA_idm_windows 3.0.4Ra988ca6, and SecKit_TA_idm_windows 1.0.3R4bb45a7 installed.
For the first dropdown panel I use $Type$ as a token, and I am able to use the "All" option. I also have a second dropdown panel in my Splunk dashboard with several options plus "All". The result of "All" should be all the dropdown options added together, since they have values. However, I cannot use the wildcard symbol, because the other dropdown selections use the stats latest function, so a wildcard "All" would only take the latest value instead of adding all the other dropdown values. So I have a new query that I would like to use for this one dropdown option. How do I run that new query only when "All" is selected? Do I have to add a change tag in the Splunk XML for that option?
This is the query I want whenever one of the options is chosen:
host=hostname sourcetype=syslog index=os_nix PMM_Status_Report
| rex "(?P<json_data>{.*})"
| spath input=json_data path=json_path
| mvexpand json_path
| stats latest(Before_Today) as "Already Patched" by server_prefix, PatchActivityTitle, BusinessUnit
| eval is_match=case(match("'$Type$'", "(?i)linux"), "linux", match("'$Type$'", "(?i)windows"), "windows", match("'$Type$'", "(?i)all"), "")
| eval case = lower(PatchActivityTitle)
| eval prefix=case(match("'$server_prefix2$'", "(?i)prdtx"), "prdtx", match("'$server_prefix2$'","(?i)prdjc"), "prdjc", match("'$server_prefix$'", "(?i)prded"), "prded", match("'$server_prefix$'", "(?i)all"), "")
| table server_prefix, PatchActivityTitle, BusinessUnit, "Already Patched", is_match, case, prefix
| where LIKE(server_prefix, "%".prefix."%") and LIKE(case,"%".is_match."%")
| replace "'*'" WITH "*"
| xyseries PatchActivityTitle, BusinessUnit, "Already Patched"
| fillnull value=0
This is the query I want when the "All" option is chosen from the dropdown:
host=hostname sourcetype=syslog index=os_nix PMM_Status_Report
| rex "(?P<json_data>{.*})"
| spath input=json_data path=json_path
| mvexpand json_path
| stats latest(Before_Today) as "Already Patched" by server_prefix, PatchActivityTitle, BusinessUnit
| eval is_match=case(match("'$PatchActivityType$'", "(?i)linux"), "linux", match("'$PatchActivityType$'", "(?i)windows"), "windows")
| eval case = lower(PatchActivityTitle)
| stats sum("Already Patched") as "Patched" by PatchActivityTitle, BusinessUnit, is_match, case
| table PatchActivityTitle, BusinessUnit, "Patched", is_match, case
| where LIKE(case, "%".is_match."%")
| xyseries PatchActivityTitle, BusinessUnit, "Patched"
| replace "'*'" WITH "*"
| fillnull value=0
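One common SimpleXML pattern for this (a sketch; token and choice values here are illustrative, not from the original dashboard) is a change handler on the dropdown that sets one of two visibility tokens, so each query lives in its own panel and only one panel runs at a time:

```xml
<input type="dropdown" token="server_prefix2">
  <choice value="All">All</choice>
  <choice value="prdtx">prdtx</choice>
  <change>
    <condition value="All">
      <set token="show_all">true</set>
      <unset token="show_single"></unset>
    </condition>
    <condition>
      <set token="show_single">true</set>
      <unset token="show_all"></unset>
    </condition>
  </change>
</input>
```

Each panel then carries depends="$show_all$" or depends="$show_single$", so the "All" query is displayed only when that choice is selected.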
Hi, I am trying to export a custom report as PDF. It always puts all the widgets on a single page. I tried the Grid layout as well as the Absolute layout, but it still shows all the data on a single page. Is there a way to add page breaks in reports? Does anyone know how to create a report with multiple pages? (Edited by @Ryan.Paredez: improved title.)
I have a query in which a field named System has the value BMS or CCTV (there are 10 fields in total in the table). I show the fields for those systems in a table and use the single value viz to display the values in a trellis layout, with some CSS applied to customize the trellis. Now I want the following: if System=BMS, the background color of all trellis fields should change to green, and if System=CCTV, it should change to amber, as if I had to color the whole row green or amber based on the System field value. Some field values are text and some are dates along with numeric values, as attached in the image. My query is the same, and I am just passing $token$ to select the Division, based on which the value of the System field changes (in this case to either BMS or CCTV). I tried the status indicator app, but the trellis does not show values for the fields there, hence I used the single value viz. Attached are the images for your reference. My query is like below:
.... | sort - Total | sort - Division | search Division=SAS | fields - Full_Name Division | search System=CCTV OR System=BMS
where the Division value is a token: when Division=SAS it returns System=CCTV, and when Division=FM it returns a row with System=BMS. So when the System value is CCTV, the whole row's background color should be green, and for System=BMS the background color should be amber, and this should be in the single value viz using the trellis layout. How can I achieve this? (Screenshot: query-output-table.png)
Hi experts, I am trying to find a string pattern "a word" in the primary search from source="123.log", and then, from the timestamp of each matching event, I would like to display some 20 events from a different source, "456.log", prior to the time of the events found in "123.log". Even join and other subsearches did not help. As shown below, the primary search suppresses the results of the secondary search:
index=index1 source="123.log" <pattern to find>
| eval date_x=strftime(_time,"%m/%d/%Y:%H:%M:%S")
| streamstats count
| where count>0
| search index=index1 source="456.log" earliest=date_x
| top limit=20 source
Please help. Thank you.
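One hedged approach is the map command, which runs a secondary search once per matching event and can substitute that event's _time into the inner search (a sketch; maxsearches and the 20-event window are illustrative, and map discards the outer results):

```
index=index1 source="123.log" "a word"
| fields _time
| map maxsearches=50 search="search index=index1 source=\"456.log\" latest=$_time$ | head 20"
```

Each inner search is bounded by latest=$_time$, so head 20 returns the 20 most recent "456.log" events prior to each match.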
Here is my log section. The lines in bold are not being written to Splunk. They are all on different lines in the log, so I expect each of them to be an individual event.
Handler: "filewriter": filewriter(FileWriterHandler)
DEBUG 2020-05-14 14:32:19.000659 [pool-2-thread-1] - UserExitDataSource.getStatusReport
DEBUG 2020-05-14 14:32:19.000659 [pool-2-thread-1] - [104250] getStatusReport: Thu May 14 14:32:19 PDT 2020
INFO 2020-05-14 14:32:19.000659 [pool-2-thread-1] - Memory at Status : Max: 455.50 MB, Total: 84.50 MB, Free: 19.48 MB, Used: 65.02 MB
INFO 2020-05-14 14:32:19.000659 [pool-2-thread-1] - Status report: Thu May 14 14:32:19 PDT 2020
Status Report for UserExit
Total elapsed time: 36 days 4:44:30.013 [total = 3127470 sec = 52124 min = 868 hr ] => Total time since first event
Event processing time: 0:04:54.386 [total = 294 sec = 4 min ] => Time spent sending msgs (max: 131 ms)
Metadata process time: 0:00:00.001 [total = 1 ms ] => Time spent receiving metadata (3 tables, 93 columns)
Operations Received/Sent: 1584578 / 1584578 Rate (overall): 0 op/s (peak: 0 op/s) (per event): 5389 op/s
Transactions Received/Sent: 37 / 37 Rate (overall): 0 tx/s (peak: 0 tx/s) (per event): 0 tx/s
1584578 records processed as of Thu May 14 14:32:19 PDT 2020 (rate 0/sec, delta 0)
However, this entire section is written to the log every 30 seconds, so the log's line count increases by 31 lines each time:
cat qqqq.log | wc -l
505331
cat qqqq.log | wc -l
505362
I cannot figure out why the log is only being indexed partially. Please help.
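If the missing lines are being merged into a preceding event or truncated rather than dropped, line-breaking settings in props.conf are the usual suspects. A hedged sketch (the sourcetype name is a placeholder; the exact values need to match how this log should actually break):

```
[my_userexit_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
```

With SHOULD_LINEMERGE disabled and LINE_BREAKER set to newlines, every physical line becomes its own event, which matches the expectation stated above.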
The index has to vary according to the Daily or Monthly selection: Daily = index1, Monthly = index2. Thanks in advance!
<fieldset>
  <label>Report Type</label>
  <choice value="Daily">Daily</choice>
  <choice value="Monthly">Monthly</choice>
</fieldset>
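A hedged SimpleXML sketch: wrap the choices in a dropdown input and use a change handler to set an index token, which the panel searches then reference as index=$report_index$ (the token names here are illustrative):

```xml
<fieldset>
  <input type="dropdown" token="report_type">
    <label>Report Type</label>
    <choice value="Daily">Daily</choice>
    <choice value="Monthly">Monthly</choice>
    <change>
      <condition value="Daily">
        <set token="report_index">index1</set>
      </condition>
      <condition value="Monthly">
        <set token="report_index">index2</set>
      </condition>
    </change>
  </input>
</fieldset>
```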
I have the following lines in my logs:
1 ADM.ADMX policies
Found ADM/ADMX policies
How do I search to filter only "1 ADM/ADMX policies"?
Hi all, I am using the Splunk Add-on for AWS to fetch CloudWatch log group events. The add-on is installed on a heavy forwarder (HF), and all the logs get their TZ property from the system TZ of the HF (EDT). Now I want to change the TZ for a couple of CloudWatch log groups to UTC, so I tried configuring props.conf in Splunk_TA_AWS/local with the following settings:
[cloudwatch:lamba:groups]
TZ = UTC
But I don't see this property being applied to logs of this sourcetype. Is there some other way to configure the TZ property for AWS logs?
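One thing worth checking (hedged): TZ only takes effect at parse time, and the props.conf stanza must match the sourcetype of the events exactly as it appears in Splunk. If the events arrive under the add-on's CloudWatch Logs sourcetype rather than a custom one, the stanza might need to look like this (the sourcetype name below is an assumption; confirm it against your indexed events first):

```
[aws:cloudwatchlogs]
TZ = UTC
```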
Hi, I have configured a CSV lookup in Splunk. Now I want to change the content of the CSV file so that it gets updated in Splunk lookup searches. Is there a way to do this?
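One possible approach (a sketch; the lookup file name and the eval are placeholders) is to edit the lookup from within Splunk itself: read it with inputlookup, modify the rows, and write it back with outputlookup, so the change is visible to searches immediately:

```
| inputlookup my_lookup.csv
| eval status="updated"
| outputlookup my_lookup.csv
```

Alternatively, re-uploading the CSV through Settings > Lookups replaces the file contents the same way.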
I need to find suspicious IPs and their hit counts (subsearch), use those IPs in an outer search over the same time frame as each subsearch result, and show fields from the outer search along with the inner search count.
index=trace type=success
| eval temp=split(ip,",")
| eval src=mvindex(temp,0)
| search [search index=trace type=blocks | bin span=10m _time | eval temp=split(ip,",") | eval src=mvindex(temp,0) | stats count by src | where count > 50 | fields src]
| table _time src route
Right now I am getting wrong results because IPs are dynamic: an IP used once by an attacker may be a genuine IP at another time, so with a time picker of the last 6 months I get genuine hits for an IP that was suspicious only once. I want the outer query to search within the same time frame as each subsearch result, i.e., find IPs of type success that were blocked more than 50 times at the same time. Thanks in advance.
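A hedged sketch of per-result time scoping: a subsearch that returns fields named earliest and latest applies those as time bounds for each returned row, which may get closer to "blocked more than 50 times in the same 10-minute window":

```
index=trace type=success
| eval src=mvindex(split(ip,","),0)
| search
    [search index=trace type=blocks
     | bin span=10m _time
     | eval src=mvindex(split(ip,","),0)
     | stats count by _time, src
     | where count > 50
     | eval earliest=_time, latest=_time+600
     | fields src, earliest, latest]
| table _time src route
```

Here each subsearch row carries its own 10-minute window (earliest to earliest+600 seconds), so an IP is only matched in the outer search during the windows in which it was heavily blocked.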
How do I get a list of users (SAML authenticated) who searched data in a particular index (_internal) in the last 24 hours?
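A hedged sketch using the audit index (requires access to _audit; the wildcard match on the search text is approximate and will also catch the index name appearing in subsearches or comments):

```
index=_audit action=search info=granted search="*index=_internal*" earliest=-24h
| stats count by user
```

SAML-authenticated users appear in the user field like any other authenticated user.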