All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is there a parameter we can use to filter the licenses from the command below? In a shell script I need to check whether the enterprise license is applied. Is there a way to pass a parameter such as stack_id, or does the filtering have to be done with Linux commands? The goal is to filter a single pool, check its size, compare it with the expected size, and confirm that all the license files are applied properly, with no manual intervention. I can't see anything here: https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/LicenserCLIcommands

splunk list licenser-pools
auto_generated_pool_enterprise
  description:auto_generated_pool_enterprise
  effective_quota:123456
  is_unlimited:0
  quota:MAX
  slaves:
  stack_id:enterprise
  used_bytes:1234
auto_generated_pool_forwarder
  description:auto_generated_pool_forwarder
  effective_quota:123456
  is_unlimited:0
  quota:MAX
  slaves:
  stack_id:forwarder
  used_bytes:0
auto_generated_pool_free
  description:auto_generated_pool_free
  effective_quota:123456
  is_unlimited:0
  quota:MAX
  slaves:
  stack_id:free
  used_bytes:0
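Since the licenser CLI page doesn't document a filter flag, one workaround is to parse the command's output in the script itself. Below is a minimal Python sketch (a grep/awk pipeline would work equally well); the sample output is the one from the question, and the indentation of the key:value lines is an assumption about how the CLI prints them:

```python
def parse_pools(text):
    """Parse `splunk list licenser-pools` style output into a dict of dicts."""
    pools, current = {}, None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line[0].isspace():          # unindented line = pool name
            current = line.strip()
            pools[current] = {}
        else:                              # indented line = key:value pair
            key, _, value = line.strip().partition(":")
            pools[current][key] = value
    return pools

# Sample output from the question (indentation assumed)
output = """auto_generated_pool_enterprise
  description:auto_generated_pool_enterprise
  effective_quota:123456
  is_unlimited:0
  quota:MAX
  slaves:
  stack_id:enterprise
  used_bytes:1234
auto_generated_pool_forwarder
  description:auto_generated_pool_forwarder
  effective_quota:123456
  is_unlimited:0
  quota:MAX
  slaves:
  stack_id:forwarder
  used_bytes:0
auto_generated_pool_free
  description:auto_generated_pool_free
  effective_quota:123456
  is_unlimited:0
  quota:MAX
  slaves:
  stack_id:free
  used_bytes:0"""

pools = parse_pools(output)
enterprise = {name: kv for name, kv in pools.items()
              if kv.get("stack_id") == "enterprise"}
print(sorted(enterprise))
```

From there the script can compare used_bytes and effective_quota for the enterprise pool without any manual step.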
How do I configure a universal forwarder to send SNMP traps on a Unix server, and how do I collect them? What is the SNMP modular input for, and where should it be installed: on the forwarder side or on the Splunk instance? Can we use syslog for SNMP traps?
With the help of @vnravikumar I created a button next to the Edit button in a dashboard, similar to the Export button, which contains links to other dashboards. For more detail please refer to https://answers.splunk.com/answers/819635/create-button-similar-to-export-button-in-dashboar.html Now my requirement is to create a lookup with two columns, source and destination, where source contains dashboard names and destination contains the names of the dashboards whose links should appear on the source dashboard under the new button. The idea is to keep the relationship between dashboards in the lookup, so that whenever a dashboard column in the lookup changes, the dashboard URLs under the newly created button update accordingly. Note: if there is a better way to do this, that is also welcome. Thanks,
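A minimal sketch of what such a lookup file could look like (the dashboard names here are placeholders, not taken from the question): each row means "on the source dashboard's button, show a link to the destination dashboard".

```csv
source,destination
dashboard_overview,dashboard_errors
dashboard_overview,dashboard_latency
dashboard_errors,dashboard_overview
```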
Hi, I really need help with this issue. I need to collect logs over REST from a web resource. I've been trying for a long time to do it myself, but unfortunately I got stuck at the final step. I have a curl command that I run against the web resource and I can see the logs on my shell screen; all I'm trying to do is convert this command into a valid REST call using the Add-on Builder, but I just can't finish it successfully. Below is a masked version of my curl command; please help me get it done somehow so I'll be able to collect the logs.

curl -X POST 'https://api.company.webresource.com/v2/logs/audit' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Authorization: Bearer 196e6h17-798a-4e2c-64hr-xxxxxxxxxxxx' -i -d '{"query":{"from_date":1587848400000}}'

Thanks in advance!
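For reference, here is the same request expressed with Python's standard library. The URL, headers, and payload are taken verbatim from the masked curl command above, so the token placeholder must still be replaced with a real one; this only sketches what the Add-on Builder's data input needs to reproduce:

```python
import json
import urllib.request

url = "https://api.company.webresource.com/v2/logs/audit"
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    # Masked token from the question; substitute a real one
    "Authorization": "Bearer 196e6h17-798a-4e2c-64hr-xxxxxxxxxxxx",
}
payload = json.dumps({"query": {"from_date": 1587848400000}}).encode("utf-8")

req = urllib.request.Request(url, data=payload, headers=headers, method="POST")
# resp = urllib.request.urlopen(req)       # uncomment to actually send it
# print(resp.read().decode("utf-8"))
print(req.get_method(), req.full_url)
```

The urlopen call is commented out since the endpoint is masked; the Request object shows exactly which method, headers, and body the REST input must send.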
For the last 2 months we had a network issue and the Splunk Add-on for Office 365 stopped collecting logs. Recently the network issue was fixed and new logs started to come in. Is there a way we can pull the logs from the last 2 months? I believe the Office 365 audit log retention is 90 days, so I hope it is possible. Please help.
Hey guys! I have been having some trouble indexing a table that grows by as much as 300k rows in an hour. I have a rising column set to a column that increments with each new row. That said, it seems the rising column values aren't all being captured properly: the input will go from 48527750 to 48527754 and skip the ones in between. If I look in the database, ALL the events from 48527750 to 48527754 are there. I know there is a known issue with using time as a rising column, but this column depends only on the row. Is there anything that would point to a reason rows are being skipped?
I have installed a free version of Splunk and installed the RSS Scripted Input as per the README document. rssfeed2.py contains:

base_url = 'https://localhost:8089'
username = 'administrator'
password = 'administrator'
earliest = 'earliest=-24h'
f = open(sys.argv[1], 'r')

# Login and get the session key
request = urllib2.Request(base_url + '/servicesNS/admin/search/auth/login', data = urllib.urlencode({username: 'administrator', password: 'administrator'}))
server_content = urllib2.urlopen(request)
session_key = minidom.parseString(server_content.read()).getElementsByTagName('sessionKey')[0].childNodes[0].nodeValue

I am getting different errors when trying to correct the line data = urllib.urlencode({username: 'administrator', password: 'administrator'}), for example with:
1) data = urllib.urlencode({'username': 'administrator', 'password': 'administrator'})
2) data = urllib.urlencode({username: 'administrator', password: 'administrator'})
Can you please tell me what's wrong here?
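The bug in the failing line is that the dictionary keys are the variables username and password rather than the literal strings, so both keys evaluate to 'administrator' and the form body collapses to a single administrator=administrator field. A small Python 3 sketch of the fix (the script itself is Python 2, where urllib.urlencode plays the role of urllib.parse.urlencode):

```python
from urllib.parse import urlencode

username = "administrator"
password = "administrator"

# Wrong: {username: ..., password: ...} uses the *values* of the variables
# as keys; here both keys are "administrator", so the dict collapses to one
# entry and the body becomes just "administrator=administrator".
wrong = urlencode({username: password})

# Right: the keys must be the literal field names the login endpoint expects.
data = urlencode({"username": username, "password": password})
print(data)
```

Variant 1) from the question is the correct shape; if it still errors, the remaining failure is something else (for example the SSL connection to https://localhost:8089), not the encoding line.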
How do I fetch the activity id using the rex command? Log record:

DATA= {"note":"Succeeded | {\r\n \"service.url\": \"\",\r\n \"enable.debug\": false,\r\n \"permission.base.url\": \"\",\r\n \"userInfo.base.url\": \"\",\r\n \"storeId.base.url\": \"\",\r\n \"app.session.timeout\": 25\r\n},activityId: 64AB3318-4DA3-4D38-9800-5DABCC7EC263,","appVersion":"1.10"

I tried the query below with no luck:

| rex field=DATA ",activityId: (?<ACTIVITY_ID_VALUE>.*)"

I'm trying to fetch the value "64AB3318-4DA3-4D38-9800-5DABCC7EC263" and show it in a table.
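The greedy .* in the posted pattern captures everything to the end of the event; stopping at the next comma isolates just the id. A sketch of the corrected pattern against a shortened version of the log line (Python's re is used here only to demonstrate the regex, which works the same in rex):

```python
import re

# Shortened version of the DATA value from the question
log = ('DATA= {"note":"Succeeded | {...},'
       'activityId: 64AB3318-4DA3-4D38-9800-5DABCC7EC263,","appVersion":"1.10"')

# [^,]+ stops at the comma that follows the id, instead of a greedy .*
m = re.search(r"activityId:\s*(?P<ACTIVITY_ID_VALUE>[^,]+)", log)
print(m.group("ACTIVITY_ID_VALUE"))
```

The SPL equivalent would be | rex field=DATA "activityId:\s*(?<ACTIVITY_ID_VALUE>[^,]+)" followed by | table ACTIVITY_ID_VALUE.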
Hi all, I'm testing the sendemail command but it's not sending email. Here's my search:

index=main | table _time | sendemail to=tkdguq0110@gmail.com subject=sendemail_test server=mail.google.com

and here's the resulting log:

2020-05-04 19:32:27,700 +0900 ERROR sendemail:1435 - 'utf8' codec can't decode byte 0xbf in position 14: invalid start byte
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 1428, in results = sendEmail(results, settings, keywords, argvals)
  File "C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 474, in sendEmail errorMessage = str(e) + ' while sending mail to: ' + ssContent.get("action.email.to")
UnicodeDecodeError: 'utf8' codec can't decode byte 0xbf in position 14: invalid start byte

I don't know what the problem is. I would appreciate any help. Thanks
Hello, following the recent update of the Cisco ASA TA to the new major version 4.0.0, we have tested it on a test server with some Cisco ASA logs copied from our production. Field extraction is good (even if the props and transforms files have changed drastically) and is more granular than before. However, we ran into an issue with the "action" field, which is very important for data models and Enterprise Security because it needs to be normalized to action=allowed, action=teardown, or action=blocked. With the regex extraction from the raw logs, the Cisco ASA TA extracts values like "Deny", "Built" or "Teardown", and then a lookup called "cisco_asa_action_lookup" matches those actions and rewrites them for CIM compatibility (allowed, teardown or blocked). Since 4.0.0 this is no longer the case; the lookup has changed drastically too. Before 4.0.0, for a "Deny" firewall event the lookup contained the translation

vendor_action,action
deny,blocked

and the action field was indeed changed from "deny" to "blocked". Now the lookup contains (still using deny as the example):

vendor_action,message_id,action
deny,,deny

The workaround for us is to change the values in this lookup to get back to normal, but I am not sure: is this an omission by the TA developer, or is it me? The TA is advertised as "CIM compliant" but that doesn't seem to be the case here. What are your thoughts? Thanks in advance for the help, Vince
I have an alert that triggers when the search returns 0 events for the last couple of hours and sends a Slack message. It runs every 5 minutes on a cron schedule and looks a few hours back. However, for some reason I can't determine, the alert sometimes triggers falsely when it shouldn't; when I manually run the search for the window it triggered on, I see plenty of events in that time span. This happens about once a month. If anyone knows why this happens or how to fix it, that would be great. If not, I'm thinking of changing the alert so that it only triggers when the result is 0 for 2 searches in a row (5 minutes apart), to avoid the false triggers. Is that possible, and how?
Hi! We're using this app in our test-Splunk environment which is running Splunk 7.3.2 We want to put this in our production environment, but that environment is running Splunk 8.0.1 I can see on the ThreatHunting app page it says max version 7.3 Will the app need an update to function on Splunk 8? If so, is it in the works? Regards, Benjamin
Hi team, what is the TIME_FORMAT for Tue Sep 17 12:43:09.925775 2019? I am not able to work it out exactly from the link below: https://docs.splunk.com/Documentation/Splunk/8.0.3/SearchReference/Commontimeformatvariables
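That timestamp breaks down into strptime-style variables as abbreviated day name, abbreviated month name, day, time with six sub-second digits, then year. A quick check in Python, whose %f covers the fractional part:

```python
from datetime import datetime

# "Tue Sep 17 12:43:09.925775 2019" broken down:
#   %a = Tue, %b = Sep, %d = 17, %H:%M:%S.%f = 12:43:09.925775, %Y = 2019
ts = datetime.strptime("Tue Sep 17 12:43:09.925775 2019",
                       "%a %b %d %H:%M:%S.%f %Y")
print(ts.isoformat())
```

In Splunk's props.conf the sub-seconds are written with %6N rather than Python's %f, so the equivalent would plausibly be TIME_FORMAT = %a %b %d %H:%M:%S.%6N %Y.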
Hello, I'm trying to use the tstats command within a data model on a data set that has children and grandchildren. Ideally I'd like to be able to use tstats on both the children and grandchildren (in separate searches), but for this post I'd like to focus on the children. Let's say my structure is the following:

data_model
--parent_ds
----child_ds

And let's say we have _time, id, dimension, status, and error as fields. Assuming that parent_ds has no filter on the dimension field, child_ds will have an additional constraint for a specific value of dimension. I am able to use the tstats command to extract the values from parent_ds with the following search:

| tstats latest(_time) as _time values(parent_ds.status) as status values(parent_ds.error) as error FROM datamodel=data_model.parent_ds BY parent_ds.id

Since I would like to run this same search on child_ds I tried the following:

| tstats latest(_time) as _time values(child_ds.status) as status values(child_ds.error) as error FROM datamodel=data_model.child_ds BY child_ds.id

When doing this I get the following error:

Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel

I've also tried nesting by specifying parent_ds.child_ds.<field> but that doesn't work either. Is it possible to accomplish what I'm trying to do? If so, could somebody point me in the right direction? Thank you and best regards, Andrew
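One pattern often suggested for this error: tstats accepts only the root dataset in the FROM clause, and child datasets are selected with a WHERE nodename clause, while field references keep the root dataset's prefix. A sketch against the structure described above, untested against this specific model:

```
| tstats latest(_time) as _time
    values(parent_ds.status) as status
    values(parent_ds.error) as error
  FROM datamodel=data_model
  WHERE nodename=parent_ds.child_ds
  BY parent_ds.id
```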
I am in need of a query that will list indexes not searched in the last 30 days.
Hello, I've seen similar questions like this one, but not exactly what I'm looking for. I've managed to create buckets in the chart where those buckets don't have values, but what I'd like to do is add the zero label as well (the "Show Data Values" option). Here is an image showing what I mean: Is this possible? Is there some CSS code that will allow me to do it? Thanks! Andrew
I have two sources:
- /var/log/secure
- /var/log/audit/audit.log

Here is my SPL so far:

(index=* source="/var/log/secure" AND "*sudo*" AND ("*chown*" OR "*useradd*" OR "*adduser*" OR "*userdel*" OR "*chmod*" OR "*usermod*") AND COMMAND!="*egrep*")
OR (index="*" source="/var/log/audit/audit.log" addr!=? res=success*
    [search index=* source="/var/log/secure" AND "*sudo*" AND ("*chown*" OR "*useradd*" OR "*adduser*" OR "*userdel*" OR "*chmod*" OR "*usermod*") AND COMMAND!="*egrep*"
    | dedup date_month date_mday
    | fields date_month date_mday])
| regex _raw!= ".*user NOT in sudoers.*"
| rename acct as Users
| rex field=_raw "(?<=sudo:)\s*(?P<user>[[:alnum:]]\S*[[:alnum:]])\s*(?=\:).*(?<=COMMAND\=)(?<command>.*)"
| eval "Command/Events" = replace(command,"^(\/bin\/|\/sbin\/)","")
| eval Users = if(match(Users,"(?<=[[:alnum:]])\@[[:alnum:]]\S*[[:alnum:]]"), replace(Users,"(?<=[[:alnum:]])\@[[:alnum:]]\S*[[:alnum:]]",""), if(match(Users,"[[:alnum:]]+\\\(?=[[:alnum:]]\S*[[:alnum:]])"), replace(Users,"[[:alnum:]]+\\\(?=[[:alnum:]]\S*[[:alnum:]])","") ,Users))
| eval Time = if(source=="/var/log/secure" ,strftime(_time, "%Y-%d-%m %H:%M:%S"),null()), Date = strftime(_time, "%Y-%d-%m")
| eval "Report ID" = "ABLR-007"
| eval "Agency HF" = if(isnull(agencyhf),"",agencyhf)
| stats list(Time) as Time list("Command/Events") as "Command/Events" latest(addr) as "IP Address" by Users Date host index "Report ID" "Agency HF"
| where 'Command/Events' !=""
| eval counter=mvrange(0,mvcount(Time))
| streamstats count as sessions
| stats list(*) as * by sessions counter
| foreach Time "Command/Events" [ eval <<FIELD>> = mvindex('<<FIELD>>', counter)]
| fields - counter sessions
| rename index as Agency, host as Hostname
| fields "Report ID" Time Agency Command/Events Hostname Users "IP Address" "Agency HF"

Problem: the SPL runs slowly when I have a lot of data. I want to know whether it is possible to trim down the results returned by /var/log/audit/audit.log by passing in the latest time found in /var/log/secure.
For example, if the latest record in /var/log/secure is May 5 2020, 2pm, is it possible to run the search for the other source, /var/log/audit/audit.log, only from May 5 2020 00:00 to May 5 2pm? And if the latest time is something else, like Feb 3 8pm, can I still achieve it?
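A common subsearch pattern for bounding one clause by a time found in another source is to have the subsearch emit fields literally named earliest and latest, which Splunk applies as time bounds to the enclosing search clause. A hedged sketch of just the audit.log clause (the eval assumes the window should start at midnight of the day of the latest secure-log event, matching the May 5 example):

```
(index=* source="/var/log/audit/audit.log" addr!=? res=success*
    [ search index=* source="/var/log/secure" "*sudo*"
      | stats latest(_time) as latest
      | eval earliest=relative_time(latest, "@d")
      | fields earliest latest ])
```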
Hi Splunkers, please find the attached image; this is the way I am getting my data. My desired format is:

Hostname | Microsoft .NET Framework 4.5.1 | Microsoft POS for .NET 1.12 | JAVA 8 Update 60 | UniversalForwarder
hostA    | 4.5.50938                      | 1.12.1296                   | 8.0.600          | 7.2.5

My procedure is to break the events with multikv and then use transpose, but multikv is not functioning as desired. TIA,
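Depending on what the extraction yields, an alternative to multikv plus transpose is stats plus xyseries, which pivots product names into columns with one row per host. The field names product and version below are assumptions about the extracted data, not fields taken from the question:

```
... | stats latest(version) as version by Hostname product
    | xyseries Hostname product version
```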
When checking for errors on the platform I started noticing error events in the _internal log:

2020-05-04 02:08:56,972 ERROR [itsi_re(reId=V26C,reMode=RealTime)] [main] TaskManager:604 - FunctionName=ProcessSplunkSearchJobResults, Status=Failed, ErrorMessage="For input string: "1588515619,432""

Somehow the input timestamp has a comma instead of a dot, and Episode Review is showing "Invalid date" for the initial date. I traced down the first search: it was itsi_event_grouping, using the itsi_event_management_group_index_with_close_events macro. This macro produces the itsi_first_event_time variable, which carries the incorrect timestamp with a comma instead of a dot: 1588515619,432. As a quick fix I appended a function to the macro that replaces the comma with a dot, but it hasn't changed the "Invalid date" message in the Episode Review dashboard. In the Spanish number format a comma is used for decimals instead of a dot, which might be related, because I'm using those locales on Linux:

> LANG=es_CL.UTF-8
> LC_CTYPE="es_CL.UTF-8"
> LC_NUMERIC="es_CL.UTF-8"
> LC_TIME="es_CL.UTF-8"
> LC_COLLATE="es_CL.UTF-8"
> LC_MONETARY="es_CL.UTF-8"
> LC_MESSAGES="es_CL.UTF-8"
> LC_PAPER="es_CL.UTF-8"
> LC_NAME="es_CL.UTF-8"
> LC_ADDRESS="es_CL.UTF-8"
> LC_TELEPHONE="es_CL.UTF-8"
> LC_MEASUREMENT="es_CL.UTF-8"
> LC_IDENTIFICATION="es_CL.UTF-8"

Any help to resolve this issue is greatly appreciated!
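For what it's worth, the normalization that the macro workaround performs can be sketched as a one-line conversion from the locale-formatted string to an epoch float:

```python
# The epoch value from the error message, with the Spanish-locale decimal comma
raw = "1588515619,432"

# Normalize the comma to a dot before parsing as a float
epoch = float(raw.replace(",", "."))
print(epoch)
```

If the "Invalid date" persists after fixing the macro output, the same comma-formatted value may still be produced elsewhere (for example by whatever LC_NUMERIC-sensitive code formats the timestamp in the first place), so the fix at the macro level only covers one of the consumers.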
When upgrading apps/add-ons in a distributed environment, is there a recommended best practice or is it similar to deploying the app initially where I can just paste the newer downloaded version from Splunkbase over the existing app and then push the new bundle to the peers to fully update the app? And also is there any scenario where a rolling restart for this wouldn’t be required? Thanks in advance