All Topics

I'm trying to join multiple lines generated in /var/log/secure. I tried transaction, but it doesn't seem to work in this case. Below is an example from the secure file. I want to combine all of these lines based on a common text such as "sshd[288792]". I can't just search on that keyword, because the sshd id ("288792") will be different for each session. Your help on this would be really appreciated.

Jan 25 18:34:06 SERVER1 sshd[288792]: Connection from xxx.xxxx.xxx.xxx port xxxx on xxx.xxx.xxx.xxx port xx
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_sss(sshd:auth): User info message: Your password will expire
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_sss(sshd:auth): success; logname= uid=0 euid=0
Jan 25 18:34:10 SERVER1 sshd[288792]: Accepted for xxxx from xxx.xxx.xxx.xxx port xxxxx xxx
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_unix(sshd:session): session opened for user xxxxx by (uid=0)
Jan 25 18:34:10 SERVER1 sshd[288792]: User child is on pid 289788
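One approach, sketched under the assumption that these events land in a linux_secure-style sourcetype (the index and sourcetype names below are placeholders): extract the sshd process id into a field first, then group on that field rather than on a literal keyword.

index=os sourcetype=linux_secure "sshd["
| rex field=_raw "sshd\[(?<pid>\d+)\]"
| transaction host pid maxspan=10m

Grouping by both host and pid guards against pid reuse across servers. If you don't need transaction's duration and eventcount fields, | stats values(_raw) as events by host, pid is a cheaper alternative.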
I have a dashboard that shows a bunch of different metrics for some data that I have. One of the metrics compares today's counts against an average of the last four weeks on the same weekday, up to the same time of day. So in essence my data looks like this:

type   avg_past   today
-----------------------
foo    10456      10550
bar    6          9
baz    20         30
etc...

I've got this charting to a bar graph where I can see, for each type, the past average vs. today. What I would like to do is show only the types where there is a statistically significant difference between the past and today. I could append something like this to my search:

| where today > (avg_past * 1.25)

That works fine for types that have lots of data, but in my example above "bar" has 50% more events today, so it would also show up even though the difference isn't really significant. My predicament is that I need the percentage threshold to be larger when the counts are small, and smaller as the counts go up. Thoughts on how to achieve this? Thanks.
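A minimal sketch of one standard way to do this: if the counts are roughly Poisson-distributed, the standard deviation of a count with mean avg_past is about sqrt(avg_past), so you can flag only the types where today sits more than, say, three standard deviations above the baseline (the multiplier 3 is an assumption to tune):

| where today > avg_past + 3 * sqrt(avg_past)

For foo this requires today > 10456 + 3*102 ≈ 10763 (so 10550 is not flagged), while for bar it requires today > 6 + 3*2.4 ≈ 13 (so 9 is not flagged either); the threshold automatically tightens in relative terms as the counts grow.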
Does the Splunk Enterprise 8.2.4 60-day eval have the same limitations with the Zscaler app and Zscaler add-on as Splunk Cloud? Thanks, Asif
Hello, I use a time input token called "timepicker":

<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>

Is there a way to call this time input token directly in my search? Something like this:

index=toto sourcetype=tutu earliest=$timepicker$ latest=$timepicker$

Thanks
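For what it's worth, a time input token exposes .earliest and .latest sub-tokens, so a sketch of the inline form (reusing the index and sourcetype from the question) would be:

index=toto sourcetype=tutu earliest=$timepicker.earliest$ latest=$timepicker.latest$

Note that a bare $timepicker$ has no value of its own; only the sub-tokens are populated.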
Hello friends. We are in the process of moving the collection of O365 events, which we currently do on an on-prem HF via "Splunk_TA_microsoft-cloudservices", to the Splunk Cloud IDM using "splunk_ta_o365". Using the same Client ID, Client Secret, and Tenant ID, we seem to be getting similar workloads:

Aip, AzureActiveDirectory, CRM, Exchange, MicrosoftForms, MicrosoftStream, MicrosoftTeams, OneDrive, PowerApps, PowerBI, PublicEndpoint, SecurityComplianceCenter, SharePoint, SkypeForBusiness, Yammer

But when we compare the number of events, we seem to get a lower amount of data from `splunk_ta_o365` in Splunk Cloud than from `Splunk_TA_microsoft-cloudservices` on-prem. What could be the problem?
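To quantify the gap before digging into either add-on, a hedged comparison sketch (the index names are placeholders for wherever each input writes; Workload is the field the O365 Management Activity API populates on each event):

(index=o365_cloud OR index=o365_onprem)
| timechart span=1d count by index

Breaking the same window down with | stats count by index, Workload can show whether the Splunk Cloud input lags on one specific workload or is behind across the board.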
I found that the format of a sourcetype changed some time ago. Now I need to extract the data correctly for both cases.

2022-01-11 17:40:59.000, SEVERITY="123", DESCRIPTION="ooops"
2018-01-24 16:35:05 SEVERITY="112", DESCRIPTION="blabla"

Extraction for the first type of entry works with this regex, which was built with Splunk field extraction:

^(?P<dt>[^,]+)[^"\n]*"(?P<SEVERITY>\d+)[^=\n]*="(?P<DESCRIPTION>[^"]+)

How can the regex be expanded to split either at "," or at the second space if the comma is missing? One idea is to always capture up to the second space and drop the comma, or to split just before SEVERITY and drop the comma, but I couldn't get either working. You can find the regex at https://regex101.com/r/mxdAyx/1. Thanks
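One sketch: rather than splitting on a delimiter, match the timestamp by its shape and make the comma optional. Against the two sample lines above, this captures dt, SEVERITY, and DESCRIPTION in both formats:

^(?P<dt>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(?:\.\d+)?),?\s+SEVERITY="(?P<SEVERITY>\d+)",\s*DESCRIPTION="(?P<DESCRIPTION>[^"]+)

The (?:\.\d+)? makes the milliseconds optional and ,? tolerates the missing comma; this assumes SEVERITY and DESCRIPTION always appear in that order, as in the samples.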
Hello, I would like to assign random new "unassigned" notables to a specific user. I wanted to accomplish this via a saved search, but unfortunately it did not work, even though the user I am trying to assign to does exist in the environment when looking at the es_notable_events lookup, which also holds previous actions made on notables.

| inputlookup es_notable_events
| search owner="unassigned"
| head 10
| eval owner="usertoassign"
| outputlookup es_notable_events append=true key_field=owner

Is there another way to do this? What am I doing wrong? Thanks, Regards,
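One commonly used alternative, sketched here rather than guaranteed for your ES version: notable ownership is normally changed through ES's notable_update REST endpoint (the same one Incident Review calls), not by appending rows to the lookup. Something like:

curl -k -u admin:changeme https://localhost:8089/services/notable_update \
    -d 'ruleUIDs=<event_id>&newOwner=usertoassign&comment=Auto-assigned'

where <event_id> is the notable's event_id from the notable index. The credentials, host, and parameter values here are placeholders; check the Notable Event API documentation for the exact parameters your ES version accepts.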
Hi, I need a Splunk search query to get the last two months of data, but only every Friday within a 15-minute window (i.e. 08:00 AM to 08:15 AM every Friday). For example:

Date          fieldA
21/01/2022    value1
14/01/2022    value2
07/01/2022    value3

Can anyone please suggest how I can achieve this?
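A minimal sketch (the index name is a placeholder): pull the two-month range, then keep only events whose weekday and clock time fall in the window:

index=my_index earliest=-60d@d latest=now
| eval dow=strftime(_time, "%A"), hm=strftime(_time, "%H:%M")
| where dow="Friday" AND hm>="08:00" AND hm<"08:15"
| eval Date=strftime(_time, "%d/%m/%Y")
| stats latest(fieldA) as fieldA by Date

The zero-padded %H:%M strings compare correctly as text, and the closing stats is just one way to get a per-Friday row as in the example; drop it if you want the raw events.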
I am in the middle of configuring a standalone Splunk installation. I am getting confused about the different attributes that can be set for overall storage and per index. It is a very small installation, with only about 30 assets connected and about 2.3TB of storage to hold data for a year. I have the following configuration so far:

frozenTimePeriodInSecs = 31536000 # 365 days

[volume:hotwarm]
path = <directory to hotwarm location>
maxVolumeDataSizeMB = 178176 # 174GB

[volume:cold]
path = <directory to cold location>
maxVolumeDataSizeMB = 1970176 # 1924GB

[network]
homePath = volume:hotwarm/network/db
coldPath = volume:cold/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb

[windows]
homePath = volume:hotwarm/windows/db
coldPath = volume:cold/windows/colddb
thawedPath = $SPLUNK_DB/windows/thaweddb

I'm not sure how to use maxTotalDataSizeMB together with maxVolumeDataSizeMB so that maxTotalDataSizeMB doesn't trigger a roll to frozen before the 365 days are up. We currently have no idea how much data will be coming in. Is it good practice to set maxTotalDataSizeMB for each index to the same size as maxVolumeDataSizeMB? I have seen this practice before... And if so, is it the maxVolumeDataSizeMB of the cold storage, or hot/warm/cold storage combined?
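For what it's worth, a sketch of the relationship as I understand it: maxTotalDataSizeMB is a per-index cap across that index's hot, warm, and cold buckets combined, while maxVolumeDataSizeMB caps each volume independently; both the cold volume cap and the per-index cap can freeze data before frozenTimePeriodInSecs does. So if you want time, not size, to drive retention, the per-index caps have to be generous. One illustrative (assumed, not prescriptive) setting:

[network]
homePath = volume:hotwarm/network/db
coldPath = volume:cold/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
# hot/warm + cold combined; set near the sum of both volume caps
# (178176 + 1970176 = 2148352) so the volume limits bite first
maxTotalDataSizeMB = 2148352

With each index sized like this, the volume caps still protect the disk overall, while no single index freezes early purely because of its own size limit.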
Hi, can you please let me know which Splunk Enterprise version is the most stable release to deploy? Thank you
Hello, any ideas how I can check RDP attempts or connections in Splunk? Many thanks
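Assuming Windows Security events are already being indexed (the index and sourcetype below are placeholders), a common starting sketch keys on logon events with type 10 (RemoteInteractive, i.e. RDP):

index=wineventlog sourcetype=WinEventLog:Security EventCode=4624 Logon_Type=10
| stats count by src_ip, user, dest

EventCode 4625 with the same logon type gives failed RDP attempts; whether the field is extracted as Logon_Type or LogonType depends on which Windows add-on you run.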
Hello, what are the best practices for Java? Thanks.
Hi, I need to make a POST request with some params to an external REST endpoint which expects an SSL certificate for authentication. If anyone has done anything like this, it would be great if you could share the steps. Thanks in advance!!
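A minimal sketch of the client side, assuming the endpoint uses mutual TLS with a PEM client certificate and key (the URL and all file names are placeholders):

curl --cert client.crt --key client.key --cacert ca.pem \
     -X POST -d 'param1=value1' -d 'param2=value2' \
     https://api.example.com/endpoint

Here --cert/--key present your client certificate and private key, and --cacert supplies the CA bundle used to verify the server. If the endpoint hands you a PKCS#12 bundle instead of PEM files, convert it with openssl first.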
Hi, please let me know: if I upgrade the TA for Microsoft Windows Defender from the current version 1.0.0 to 1.0.6, will it cause any issues? Do I have to save the secret key and other settings beforehand? Regards, Rahul
Since our last update to 8.2.2.1, the _internal index contains lots of ERROR messages for which we cannot find any information about their meaning:

ERROR ILightWeightSearchStringParser [4392 SchedulerThread] - still in inQueto=true

Does anybody know this message and can give some information about it? Thanks
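One way to scope the problem before opening a support case, sketched with the standard _internal fields (the SchedulerThread in the message suggests a scheduler connection):

index=_internal sourcetype=splunkd log_level=ERROR component=ILightWeightSearchStringParser
| timechart span=1h count by host

Correlating the spike times against index=_internal sourcetype=scheduler can show which savedsearch_name runs at the same moments.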
Hi, all! Here are the sources that I want to include in my search:

- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log.1
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log.2
...
- /appvol/wlp/DIVR01HK-AS01/applogs/appl.log.50

How could I cover all of those sources in a simple way in my search command?
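A minimal sketch, assuming the files are already indexed (the index name is a placeholder): the source field accepts wildcards, so one pattern covers the base file and every numbered rotation:

index=my_index source="/appvol/wlp/DIVR01HK-AS01/applogs/appl.log*"

The trailing * matches appl.log as well as appl.log.1 through appl.log.50.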
Greetings! How do I create tickets in Splunk and assign them to someone? Thank you in advance!
Hello, I am trying to configure Splunk Connect for Kubernetes to capture a k8s cluster's application logs. I have problems configuring the HTTPS connection to HEC. On the Heavy Forwarder, I have configured a server certificate which has been signed by our company authority. Then, in the Splunk Connect for Kubernetes Helm values, I configure https:

splunk:
  hec:
    # host is required and should be provided by user
    host: hostname.domain.com
    # token is required and should be provided by user
    token: MY-HEC-TOKEN
    # protocol has two options: "http" and "https", default is "https"
    # For self signed certificate leave this field blank
    protocol: https

When deploying, I see the following logs on the Heavy Forwarder:

01-25-2022 09:37:16.729 +0100 WARN SSLCommon [1235867 HttpInputServerDataThread] - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='unknown CA'.
01-25-2022 09:37:16.729 +0100 WARN HttpListener [1235867 HttpInputServerDataThread] - Socket error from 10.8.199.195:55608 while idling: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

I have to configure insecureSSL: true to get the connection working and see logs on the indexer. But if I activate the HTTPS connection, I do not want it to be insecure ^^

I am a bit confused about the Splunk Connect for Kubernetes configuration. Can I use:

splunk:
  # Configurations for HEC (HTTP Event Collector)
  hec:
    # The PEM-format CA certificate file.
    # NOTE: The content of the file itself should be used here, not the file path.
    # The file will be stored as a secret in kubernetes.
    caFile:

to configure my company CA? Or are the keys clientCert, clientKey, and caFile only used for mTLS configuration?

Thank you in advance for your answers. Regards, Nicolas.
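For reference, a sketch of the values I'd expect to work, based on the comments in the chart's own values file quoted above (certificate content abbreviated, not filled in): caFile carries the CA that verifies the HF's server certificate, while clientCert/clientKey are only needed if the HEC endpoint additionally requires client certificates (mTLS).

splunk:
  hec:
    host: hostname.domain.com
    token: MY-HEC-TOKEN
    protocol: https
    insecureSSL: false
    # PEM content of the company CA that signed the HF server cert,
    # pasted inline (not a file path), per the chart's comment
    caFile: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----

The 'unknown CA' alert in the HF log is sent by the client when it cannot verify the server's certificate chain, which is consistent with the connector not yet trusting the company CA.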
Hi all, we are currently using the "CrowdStrike Falcon Event Streams Technical Add-On" in our instance: https://splunkbase.splunk.com/app/5082/ We recently received the alert about the jQuery update. According to the Upgrade Readiness App, this add-on does not support jQuery 3.5 so far. Does anyone know about the support schedule for this add-on? Regards,
I currently have a Universal Forwarder running on a Linux syslog server with a bunch of file monitors in place, such as:

[monitor:///var/log/10.10.10.99/syslog.log]
index = hp
host_segment = 3
disabled = 0

The index that I'm using for my new file monitor stanzas is a newly created index that I haven't used previously. I've created a couple of new deployment apps with the new file monitors and pushed them out to the UF on my syslog server. I can see other monitored files on the syslog server being forwarded into Splunk; however, I'm not seeing my new files being monitored. I've reloaded the deploy-server to ensure that the configs are being pushed out. I have also run a "./splunk btool inputs list" command, and I can see that it lists my new configuration as part of the aggregated inputs.conf. However, I'm not seeing any events for these new file monitors being forwarded into Splunk. The new index is showing 0 events received. Is there a way to list the events being output by the Universal Forwarder? Also, is there a way to list events from my Universal Forwarder that are hitting the input queue on my Splunk indexer? Thanks,
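Two hedged checks that usually answer both questions, using the forwarder's own telemetry in _internal (the host name and index name are placeholders): the TailingProcessor logs whether a file was picked up, and metrics.log shows per-index throughput as it arrives on the indexer.

index=_internal host=my-syslog-uf source=*splunkd.log* component=TailingProcessor
index=_internal host=my-syslog-uf source=*metrics.log* group=per_index_thruput series=my_new_index

On the forwarder itself, ./splunk list monitor (run as the Splunk user) lists the files the UF believes it is monitoring. One gotcha consistent with these symptoms: if the new index exists only in inputs.conf and was never created on the indexer, events sent to it are discarded, so it's worth confirming the index exists on the indexer side as well.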