All Topics

Hi All, I have one cluster master and two indexers. I've used the configuration bundle push action several times. The last time I used this process, I was trying to push an add-on; the result on both indexers was "Added certificates". After that I edited server.conf on both indexers with the correct URL, and it still didn't work. Any suggestions? Thanks! Hen
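For reference, the peer-side clustering stanza that server.conf edits like this usually touch looks like the sketch below. The hostname and key are placeholders, not values from this post:

```ini
# server.conf on each indexer peer -- illustrative values only
[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = <your_cluster_secret>
```

A restart of the peer is generally needed after editing this stanza by hand.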
Hello, I have events without a timestamp such as epoch time or a format like 2020-02-03 18:41:00. The needed information is split up across the raw event. Raw events look like this sample: sometext, date,20200203, some text, some text, time,184100, some text. Is it possible to build a useful timestamp out of this during data onboarding? Best regards
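One possible index-time approach, sketched under the assumption of a made-up sourcetype name and a Splunk version that supports INGEST_EVAL (7.2+), is to rebuild _time from the two fragments:

```ini
# props.conf -- "sample:split_ts" is a hypothetical sourcetype name
[sample:split_ts]
TRANSFORMS-set_time = set_time_from_parts

# transforms.conf
[set_time_from_parts]
INGEST_EVAL = _time=strptime(replace(_raw, ".*date,(\d{8}),.*time,(\d{6}).*", "\1\2"), "%Y%m%d%H%M%S")
```

The replace() call concatenates the 8-digit date and 6-digit time from the sample above into "20200203184100", which strptime then parses. The regex is only tested against the sample event shown here.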
I'm trying to get BitLocker events into Splunk. Below is what I have in inputs.conf, and it does not appear to be working:

[WinEventLog://Microsoft-Windows-BitLocker/BitLocker Management]
index = main
disabled = 0
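As a sanity check, the stanza is structurally valid; the channel name must match the output of `wevtutil el` on the host exactly. One hedged variant worth trying, with the rendering mode made explicit, is:

```ini
# inputs.conf -- same channel, rendering mode stated explicitly
[WinEventLog://Microsoft-Windows-BitLocker/BitLocker Management]
index = main
disabled = 0
renderXml = false
```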
I am using the Splunk App for AWS with the Custom Data Type Input, SQS-Based S3, to pull in reports in XML format that are dropped off in an S3 bucket. I set up a notification for the S3 bucket to send a message to SNS, which goes to SQS. Simple text files work fine when PUT into the bucket; however, the reports I'm receiving are anywhere from 5 to 25 MB, which exceeds the SQS message size limit, and I'm guessing this is why I cannot receive them. I cannot find anything in the documentation covering this situation. There seem to be no errors in CloudTrail or splunkd.log either, so I have no way to trace where it is failing. Any ideas would be a great help.
Because of reasons, I need a way to find every customized config parameter of an app, i.e. everything placed in its local directory. I get some of this using the rest command, but not exactly that, and I'm not sure it's possible. If it is not, getting the full configuration for each object in an app could work too. It should be something like the list generated under "all configurations", but I need the details of each object: the same details shown when you click one of those objects in the "all configurations" menu. Any clue? Thanks!
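As a sketch (the app name "myapp" and conf file are placeholders), the REST configs endpoints can be queried per .conf file from the search bar:

```
| rest /servicesNS/-/myapp/configs/conf-props count=0
| search eai:acl.app="myapp"
```

Note that this returns the merged configuration view, not only what sits in local/; isolating local-only overrides typically requires btool on the command line instead.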
Hello, In order to detect excessive failed logins we use the correlation search below:

| tstats summariesonly=true allow_old_summaries=true values(Authentication.tag) as "tag", values(Authentication.user) as user, values(Authentication.sourcetype) as sourcetype, dc(Authentication.user) as "user_count", dc(Authentication.dest) as "dest_count", count from datamodel=Authentication.Authentication where nodename=Authentication.Failed_Authentication by "Authentication.app","Authentication.src"
| `drop_dm_object_name("Authentication")`
| where 'count'>=6

For some reason it does not return the values of the sourcetype and tag fields; they stay empty. There is no issue with other fields like user and dest. A simple | from datamodel:Authentication... search returns all fields' values. Do you have an idea what causes this and how it could be fixed? Many thanks!
Is there a recommended number of CPU cores for a client workstation accessing Splunk ES? The company is running virtual consoles with very conservative resource allocation. Since the Splunk UI is JavaScript-based, I figure this can result in lag and an overall degraded user experience.
Dear Splunk experts, dear community, I am currently planning a change in our Splunk environment to increase reliability and scalability. We currently run a single indexer with a number of search heads. The goal is for the environment to continue operating through any single host outage, and I would like to set up a cluster of two indexers for this. We store indexes on a mirrored SAN, so data remains available if the main node goes down; the standby holds a full copy. It is possible to split the SAN volume into two equal parts, make partitions for the indexers, and set replication factor = 2. In that case we would store four copies of the data (2 peers * 2 SAN nodes) and have half the volume available for indexes. Is there a better way to store data in our case without this overkill in copies and without losing capacity? Setting RF=1 is not an option, because half the indexed data would be unavailable if an indexer peer were lost. Can two indexer peers read and write the same SAN partition? Thank you!
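For context, the replication factor under discussion is configured on the cluster master; a minimal sketch with illustrative values:

```ini
# server.conf on the cluster master -- illustrative values only
[clustering]
mode = master
replication_factor = 2
search_factor = 2
```

Each peer manages its own buckets on its own storage, which is why sharing one writable partition between two peers is generally not an option.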
Hi, I created a lot of panels for my dashboards. Now I want to hide the links at the bottom of each panel. This should work like this:

<option name="refresh.link.visible">false</option>
<option name="link.openSearch.visible">false</option>
<option name="link.openPivot.visible">false</option>
<option name="link.inspectSearch.visible">false</option>
<option name="link.exportResults.visible">false</option>

Is there a way to implement this for all panels without adding it via Simple XML, perhaps in a .conf file or something similar? Hope you can help me. Best regards, Thomas
As mentioned in the documentation, I am trying to create a search but I'm not getting the expected response: https://docs.splunk.com/Documentation/Splunk/8.0.1/RESTTUT/RESTsearches#Tips_on_accessing_searches I'm getting the response below:

<title>jobs</title>
<id>https://xyz:8089/services/search/jobs</id>
<updated>2020-02-03T06:11:04-08:00</updated>
<generator build="7af3758d0d5e" version="7.3.3"/>
<author>
<name>Splunk</name>
</author>
<opensearch:totalResults>0</opensearch:totalResults>
<opensearch:itemsPerPage>0</opensearch:itemsPerPage>
<opensearch:startIndex>0</opensearch:startIndex>

According to the documentation, I am supposed to receive an sid. Can someone help with what is going wrong?
Use Case: We currently have Splunk MINT providing data on a mobile application via the MINT Data Collector. The data and dashboard provide the service we need for our developers, but we want a more (internal) customer-friendly view, and/or something we can integrate into existing dashboards with our own custom calculations. Troubleshooting: Extensive searches on this site have turned up no explicit answers. Phone calls to support and sales resulted in unanswered voicemails. Question: Is there a way to access the data MINT uses to create the Management Console? There appears to be an option using Splunk Enterprise: somehow forwarding or importing the data into Enterprise, then forwarding/exporting that data to a "third party provider" (which would, ostensibly, be our data store). Is this actually possible, and/or is there any other option?
Email is not sent to the mailbox because of the error below on the SH cluster. Splunk version: 7.0.0.

index=_internal source=scheduler.log savedsearch_name=********* | table _raw, _time, message, status, alert_actions
index=_internal source=*python.log subject="***********" | table *

The error from the logs:

command="sendemail", createSSLContextFromSettings() got an unexpected keyword argument 'confJSON' while sending mail to: **@mail.com
Need help formatting a regex command's output. The search I created:

index=opennms "bigipServiceDown"
| rex field=eventlogmsg "bigipNotifyObjMsg=(?<POOL>.+down. )"
| table POOL, nodelabel

Output (the same row repeats three times):

POOL: Pool /Common/tiger.exxonmobil.com-443-pl member /Common/10.159.217.11:443 monitor status down. [ /Common/https-vdi-connection_manager: down;
nodelabel: INMCOIGW-APNADC003

Expected output (columns POOL, Monitor VDI, nodelabel):

POOL: tiger.exxonmobil.com-443-pl member 10.159.217.11:443 monitor status down
Monitor VDI: Common/https-vdi-connection_manager: down
nodelabel: INMCOIGW-APNADC003

POOL: leopard.exxonmobil.com-443-pl member Common/vdi-pnh.ap.xom.com:443 monitor status down
Monitor VDI: Common/https-vdi-connection_manager-pnh: down
nodelabel: INMCO-APNADC104

Raw data:

eventlogmsg=""<p> bigipServiceDown trap received bigipNotifyObjMsg=Pool /Common/leopard.exxonmobil.com-443-pl member /Common/vdi-pnh.ap.xom.com:443 monitor status down. [ /Common/https-vdi-connection_manager-pnh: down; last error: /Common/https-vdi-connection_manager-pnh: Response Code: 404 (Not Found) @2020/02/03 07:06:46. ] [ was up for 0hr:49mins:15sec ] (slot2) bigipNotifyObjNode=/Common/vdi-pnh.ap.xom.com bigipNotifyObjPort=443</p>""
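One hedged rewrite of the extraction, splitting the message into separate capture groups so each desired column has its own field (the regex is only tested against the sample event above):

```
index=opennms "bigipServiceDown"
| rex field=eventlogmsg "Pool /Common/(?<POOL>\S+) member /Common/(?<MEMBER>\S+) monitor status down\. \[ /Common/(?<MONITOR>[^:]+): down"
| table POOL, MEMBER, MONITOR, nodelabel
```

Splitting into small anchored groups like this avoids the single greedy `.+down.` capture, which is what pulls the whole message into POOL.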
Hi, my system admins are having issues with the Splunk server's /var partition. They are saying it is heavily used (and only during the daytime does this appear to happen!). For example, from 9:30 this morning we wrote 600 MB in 4 hours, so they are having to clean it down, etc. We do have alerts, but not at this frequency. Any idea what could be going on? Thanks, Robert Lynch
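To see which indexes account for the write volume, a standard starting point is Splunk's own internal metrics data:

```
index=_internal source=*metrics.log group=per_index_thruput
| stats sum(kb) as total_kb by series
| sort - total_kb
```

Note that _internal itself lives under the Splunk installation (by default under /opt/splunk or, for some layouts, /var), so internal logging can be part of the daytime growth.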
Hello, I've been trying to send emails automatically to recipients from search results; below is my code:

... | eval email_footer=" "
| eval email_subject="Alert something"
| eval email_message="Dear colleague, We received an IT alert regarding something. Should you have any question, please contact us. Best regards. "
| map search="sendemail server="smtp.company.com" from="noreply@company.com" to="$BusinessEmail$" footer="$email_footer$" subject="$email_subject$" message="$email_message$""

I'm facing a number of anomalies with this: my search works randomly (last time I tried, I got 2 successful attempts out of 5). Hardcoding the "to" for testing makes sendemail work, but the email arrives empty (at least $email_message$ is not taken into account, if not $email_footer$ too). When the search fails, the error I get is:

command="sendemail", {} while sending mail to:

with no recipient address and no details on why it failed. I have also tried other variables for the recipient, with no success. Using sendemail directly without map has the same effect. Any help or leads to troubleshoot this would be appreciated; I'm having difficulty finding answers in the search.log file. Best regards.
We have a single server running the indexer, master, and search head. As we only have one server, it is a single point of failure. We are thinking of putting a Splunk cluster in place so our Splunk infrastructure would be resilient. To deploy the cluster we were thinking of using 2 servers:
* Server A: indexer, master, search head.
* Server B: indexer, standby master, search head.
The documentation (https://docs.splunk.com/Documentation/Splunk/8.0.1/Indexer/Keydifferences) says "The master node, peer nodes, and search head must each run on its own instance." Does anyone know why the three components have to be on different instances?
Hello, I need to transform the table I have from:

_time  avg1  avg2  avg3
t1     v11   v21   v31
t2     v12   v22   v32
t3     v13   v23   v33

into:

_time  KPI   VALUE
t1     avg1  v11
t2     avg1  v12
t3     avg1  v13
t1     avg2  v21
t2     avg2  v22
t3     avg2  v23
t1     avg3  v31
t2     avg3  v32
t3     avg3  v33

I need this format to create a punchcard visualization out of it later. How would I achieve this? Kind regards, Kamil
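For reference, SPL's untable command performs exactly this wide-to-long transformation; appended to the base search that produces the wide table:

```
... | untable _time KPI VALUE
```

untable keeps the first argument (_time) as the row key, moves each remaining column name into KPI, and its cell contents into VALUE.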
Hi, I'm trying to set up the add-on Splunk_TA_jboss, and every time I get the same response: "Your entry was not saved. The following error was reported: SyntaxError: Unexpected token S in JSON at position 46." When I'm using Splunk_TA_jmx everything looks fine (data flows). In both I use the same config entries:

Global Settings
JBoss JMX URL* (example: service:jmx:remoting-jmx://127.0.0.1:9999): service:jmx:http-remoting-jmx://192.168.1.2:9990
JBoss JMX user name*: user
JBoss JMX password*: pass
Confirm password: pass
Index: jboss
Log level: DEBUG
Data collection settings: [v] Enable data collection from JBoss log files.

I will be grateful for your help. Piotr
Hi All, hope you are all doing well. I set up an email alert and event creation using Splunk, and it was working fine, but I now have a new condition for the existing alert: suppress two alerts and their event creation when a specific alert is present. In my case, when there is an ABC alert I have to ignore XYZ and PQR. The logic seems simple: when ABC comes, suppress XYZ and PQR. But I am unable to express it in Splunk. I tried the query below, but I think it yields null when there are any alerts other than ABC:

index="myindex" sourcetype="mysourcetype" lab_hub_name="XYZ Hub" rag_status="0" (lab_hub_tag="LKJ" OR lab_hub_tag="ABC" OR lab_hub_tag="PQR" OR lab_hub_tag="XYZ" OR lab_hub_tag="QWE" OR lab_hub_tag="ERT" OR lab_hub_tag="FGH") earliest=-7m latest=now
| stats latest(_time) as latest_tim, count by lab_hub_tag
| rename count as rag_count
| join type=left lab_hub_tag
    [search index="myservicenow" sourcetype="snow:incident" short_description="Splunk Alert - XYZ*" state!="7" earliest=-1d latest=now
    | rex field=short_description "Splunk Alert - XYZ - (?[\S ]+$)"
    | stats latest(state) as state, count by lab_hub_tag short_description
    | fields - count]
| eval one=if(lab_hub_tag="ABC" AND rag_count>0,"yes","Null")
| search one=yes
| fillnull state short_description
| eval temp_count=if(rag_count>0 AND state="6" OR state="0",1,0)
| eval correlation_id=latest_tim.lab_hub_tag
| where temp_count=1

Please let me know how I can achieve this. Thanks for your help.
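One hedged way to express "when ABC is present, drop XYZ and PQR" without depending on the join is to flag ABC's presence across all rows with eventstats; a minimal sketch appended to the base search:

```
... | stats latest(_time) as latest_tim, count as rag_count by lab_hub_tag
| eventstats sum(eval(if(lab_hub_tag="ABC" AND rag_count>0, 1, 0))) as abc_active
| where NOT (abc_active > 0 AND (lab_hub_tag="XYZ" OR lab_hub_tag="PQR"))
```

eventstats writes the same abc_active value onto every row, so the final where clause can suppress XYZ and PQR rows only in runs where ABC actually fired.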
A Palo Alto firewall device (IPS and IDS only) is sending logs to an rsyslog server, where they are saved in a directory. The logs from that directory are ingested using a syslog input monitoring that path. I have installed the Palo Alto Networks Add-on on the syslog servers, which also act as heavy forwarders only (not on the indexers, as we are using Splunk Cloud). As I set the sourcetype to "pan:log", Splunk has segregated the logs into "pan:threat", "pan:system", and "pan:traffic". I believe the parsing of logs should happen on this HF, but I don't see any field extraction happening; only the sourcetype segregation works. Without field extraction, it is tough for the Security Operation Center to do analysis. Should I look at the log format? The default log format is being used. Does field extraction work only after I install the add-on on the Splunk Cloud search head or indexers? If the heavy forwarder can parse the data, should I still install it on Splunk Cloud?