All Topics


Hi All, We've been having issues with a CSV that is held open for a long period of time while a script writes to it. Not my idea, but I can't get the System Admins to budge. I was using a normal monitor stanza and props to do field extractions at index time. I moved to MonitorNoHandle and it doesn't seem to do field extractions at index time; is this a known thing? I know I can fix it by doing a search-time field extraction, which I'm probably going to do, but I can't see it documented anywhere that this is a known limitation. Cheers
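For reference, a minimal search-time extraction sketch. The sourcetype name, transform name, and field names below are placeholders, not taken from the question; adjust them to the actual data:

```ini
# props.conf (search head) -- search-time CSV extraction sketch
[my_csv]
REPORT-csvfields = my_csv_delims

# transforms.conf
[my_csv_delims]
DELIMS = ","
FIELDS = host, status, message
```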
Hello all, I have a problem in Splunk Enterprise 7.3 when I want to enable TLS for mail delivery.

Problem: When I set email security to TLS (Server settings -> Email settings -> Enable TLS), email delivery stops working. The SMTP server connection works (server:port is provided) when I set Email Security to "none". The logs on the SMTP server show the following errors:

smtpd[252494]: SSL_accept error from <splunk_server> [xx.xxx.xx.xxx]: -1
smtpd[252494]: warning: TLS library problem: 252494:error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher:s3_srvr.c:1427:
smtpd[252494]: lost connection after STARTTLS from <splunk_server> [xx.xxx.xx.xxx]
smtpd[252494]: disconnect from <splunk_server> [xx.xxx.xx.xxx], message count 0

It looks like a problem with the ciphers in use. I've checked alert_actions.conf in Splunk; the following default settings are set:

sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256

Do you have any ideas how to solve this, or where to look further? Thanks and many regards, Michael
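The "no shared cipher" error usually means the SMTP server offers none of the suites in Splunk's cipherSuite list. One way to test this is to temporarily widen the list in alert_actions.conf; the exact suites below are an assumption (common RSA suites), not a confirmed fix, and the list should be tightened again once the overlap is found:

```ini
# alert_actions.conf -- diagnostic sketch: widen the cipher list to find overlap
# with the SMTP server, then tighten again once confirmed
[email]
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256
```

It can also help to probe the server directly (e.g. with openssl s_client -starttls smtp) to see which cipher it negotiates.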
Hi All, We are planning to build the DMC on a separate Linux server, and we have the components below:

Search head cluster: 2 search head cluster members
Deployer/Cluster Master/License Master: 1 server
Indexer cluster: 3 indexers
Deployment server: 1 server

What are the hardware requirements for the DMC, and what are the configuration steps to monitor the rest of the components (7 servers) in the DMC? Please share a plan. Thank you!

With regards, Santhana Bharathi
Dear Sir, I would like to set a token in order to be able to modify the -1h threshold.

| tstats latest(_time) as latest where index=* earliest=-24h by host
| eval recent = if(latest > relative_time(now(),"-1h"),1,0), realLatest = strftime(latest,"%c")
| eval data=case(recent=="0","No Data",recent=="1","OK")
| table host, realLatest, data

My earliest=-24h token works fine, but I would also like to set a token for the "-1h" relative time, in order to be able to modify the threshold. Can you help? Thank you in advance. Regards, Stives
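One possible approach, assuming a dashboard input whose token is named recent_threshold (a hypothetical name) and whose values are relative-time strings such as -1h or -4h:

```spl
| tstats latest(_time) as latest where index=* earliest=-24h by host
| eval recent = if(latest > relative_time(now(), "$recent_threshold$"), 1, 0), realLatest = strftime(latest, "%c")
| eval data = case(recent == "0", "No Data", recent == "1", "OK")
| table host, realLatest, data
```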
Hi, Can anyone help me understand the errors I'm getting in the AWS add-on? I have configured the inputs and checked everything on the AWS side, and everything looks good, but I'm getting errors as
Hey guys, I bought Splunk Enterprise software several years ago with a perpetual license. I'm wondering which License Agreement text I need to follow: the one that was located on my server during the installation (/opt/splunk/), or the online version on the Splunk website?
Hi, I have the following data coming into Splunk in one event, and I want the event to be formatted into a proper Splunk table.

200 - OK
---------------------------------------------------------------------------------------------
| LM   | GRP   | SR | ID  | STATE | SDL | QUEUE | SK    | FAIL |
|-------------------------------------------------------------------------------------------|
| gpp1 | GROUP | hv | 231 | START | --- | HV    | 15272 | 1    |
| gpp2 | GROUP | hv | 233 | START | --- | HV    | 15226 | 2    |
| gpp2 | GROUP | hv | 234 | START | --- | HV    | 11546 | 2    |
| gpp1 | GROUP | hq | 240 | STOP  | --- |       | 0     | 1    |
| gpp2 | GROUP | hq | 242 | START | --- |       | 0     | 1    |

So I want to create fields from the table headers (LM, GRP, ID, STATE, etc.), and each field should contain all the values underneath it. I tried extracting the fields and also used the multikv command, but no luck.
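A hedged sketch: multikv expects a plain header row, so one approach is to strip the pipe borders and divider lines first. The field names come from the sample header, but the sed patterns and the forceheader line number are assumptions that may need tuning against the real events:

```spl
... | rex mode=sed "s/[|]/ /g"
    | rex mode=sed "s/-{5,}//g"
    | multikv forceheader=3
    | table LM GRP SR ID STATE SDL QUEUE SK FAIL
```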
We have some log files with names like this: logs_2020-06-30.logs. A sample event looks like this:

2020-07-01 12:01:55.123 something something something something something
12:01:55.124 something something
12:01:55.125 something something
2020-07-01 12:02:57.234 something2 something2 something2 something2 something2
12:02:57.235 something2 something2
12:02:57.236 something2 something2

We are breaking the events like this:

2020-07-01 12:01:55.123 something something something something something
12:01:55.124 something something
12:01:55.125 something something

As we can see, some of the events have a timestamp without a date. Currently Splunk is using the date from the filename, in this case 2020-06-30. We don't want to use the date provided in the filename, as the log files are very often created on the previous date. And we cannot use DATETIME_CONFIG = CURRENT either, as that is not accurate enough. Is there a way to use the current date for these events? Many thanks. S
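One thing worth trying, under the assumption that the sourcetype stanza name below is a placeholder: anchoring an explicit TIME_PREFIX/TIME_FORMAT at the start of each event generally stops Splunk from consulting other date sources such as the filename, since each broken event begins with a full date. Verify against a test index before rolling out:

```ini
# props.conf -- sketch; [app_logs] is a hypothetical sourcetype name
[app_logs]
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}\s)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```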
Hello, I have a field that contains the string below.
a) There can be fewer/more than the 4 events listed below.
b) The values of the events will be different.

(event=aa)(event=bb)(event=cc)(event=normal)

1) How can I create a new field, events, that equals "aa,bb,cc,normal"?
2) Is there a way to exclude the normal event, so that events = "aa,bb,cc" only?
3) Is there a way to make it a list so I can filter on these event values? (i.e. potentially count the # of events with aa or cc or (aa + cc)?)
4) Is there a way to count the events returned in the field?

Thank you!
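A sketch covering all four points, assuming the source field is called myfield (a placeholder name):

```spl
... | rex field=myfield max_match=0 "\(event=(?<events>[^)]+)\)"
    | eval events = mvfilter(events != "normal")
    | eval events_csv = mvjoin(events, ",")
    | eval event_count = mvcount(events)
    | search events="aa" OR events="cc"
```

rex with max_match=0 makes events a multivalue field, mvfilter drops "normal", mvjoin builds the comma-separated string, and mvcount gives the per-event count.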
Has anyone come up with a custom sourcetype for Genesys Application logs?
Hi, I am running Splunk Enterprise version 8.0.3 on a Windows 10 virtual machine. After installing Splunk Security Essentials (version 3.1.1), I couldn't get the Home page of the App (localhost:8000/en-US/app/Splunk_Security_Essentials/home) to load. I've tried restarting the host and bumping Splunk, but the page still isn't responding. What could be the issue?  Thank you!
Hi Splunk Team, Why is the age of the data larger than the frozenTimePeriodInSecs time without the data being deleted? My index config is as follows:

frozenTimePeriodInSecs = 38880000

Thanks
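Worth checking: a bucket is only frozen once its newest event passes the threshold, so old events can outlive frozenTimePeriodInSecs while they share a bucket with newer ones. A quick inspection sketch (the index name is a placeholder):

```spl
| dbinspect index=my_index
| eval newest_age_days = round((now() - endEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch newest_age_days
```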
Can anyone tell me how I would replace entire strings if they contain partial strings? As a basic example, in my search results, if a URL contains the word "homework", I would like to replace the entire URL with just "Homework"; if it contains "learn", then "Learning", and so on. I have tried the search below a number of ways and can't seem to get it to work the way I need.

| eval domain = if(cs_host = "*homework*", "homework", if(cs_host = "*learn*", "learning", cs_host))

Domain   Count
Homework 2
Learning 5
etc

Thanks
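For what it's worth, the eval if() comparison with = does not do wildcard matching; a common approach is case() with match() (the regex fragments below are assumed from the examples in the question):

```spl
... | eval domain = case(match(cs_host, "homework"), "Homework",
                         match(cs_host, "learn"),    "Learning",
                         true(),                     cs_host)
    | stats count by domain
```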
We have a field called number, which has both alpha and numeric values, like number="AVAILABLE=25 USD;". The field also has multiple values, like number="CREDIT_PAYMENT=200.22 USD; DEBIT_PAYMENT=500.10 USD;", and it has null values, like number=null. I want to extract all dollar amounts, like 25, 200.22 and 500.10, and create a new field. How can I achieve this? Please help.
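A sketch that pulls every USD amount into a new multivalue field (the field name amount is arbitrary); null values simply produce no matches:

```spl
... | rex field=number max_match=0 "(?<amount>\d+(?:\.\d+)?)\s*USD"
```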
My base query:

index=... sourcecode=... | timechart span=1m count as number by name useother=f

In the result I have something like:

_time  NameValue1  NameValue2  NameValue3
        0           0           0
       23           0           0
       26          30          40
       55          77           0
        0         100           3

It is not known how many different values of `name` we'll get, though the number is not high. For each value of `name` I need to count how many time slots (_time) have 0 values. If it was only one value, I would've done something like:

| search count < 1 | stats count

If I could somehow split / unzip each name value with its count into their own events and append them together, then I could use the above approach with `count by name`. What would be a workable approach here?
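The untable command does exactly that split: it turns the timechart matrix back into one row per (_time, name) pair, after which a conditional stats can count the zero slots. A sketch:

```spl
index=... sourcecode=...
| timechart span=1m count by name useother=f
| untable _time name count
| stats count(eval(count == 0)) as zero_slots by name
```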
The events stream has an ID field in every record. There is a lookup table with a small subset of IDs. The task is to calculate the total number of occurrences of each ID from the lookup table for every 15 minutes. It is possible that certain IDs from the table will not be found; in such cases they should still be included in the result with a count of zero.

SQL version:

SELECT ID, COUNT(ID)
FROM Events e
RIGHT JOIN Lookup l ON l.ID = e.ID
GROUP BY l.ID

What would be a good Splunk way to achieve the same?
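One idiomatic pattern, assuming the lookup file is called my_lookup.csv (a placeholder): filter the events by the lookup IDs, count, then append a zero row for every ID so the missing ones survive the final aggregation:

```spl
index=... earliest=-15m [| inputlookup my_lookup.csv | fields ID]
| stats count by ID
| append [| inputlookup my_lookup.csv | eval count = 0 | fields ID count]
| stats sum(count) as count by ID
```

For repeating 15-minute slots, add `| bin _time span=15m` before the first stats and group by _time as well as ID.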
My architecture is Splunk Cloud plus Splunk Enterprise search heads and indexers, and I have an on-prem heavy forwarder. I want to try out the Splunk Add-on for Microsoft Office 365. Would it be recommended to install it on the heavy forwarder, have that reach out to O365 to retrieve the audit logs, and then send them up to Splunk Cloud? Or can I have Splunk Cloud connect directly to my O365 tenant?
There doesn't seem to be a lot of documentation or discussion online covering the setup of an intermediate heavy forwarder. We need this for the following reasons:

* to scrub/anonymize personal information from data coming from universal forwarders
* to reduce load on the indexing server, whose parsing queues are consistently full

Here is the deployment: [uf] > [hf] > [indexer]

Does anybody have example .conf files that would support this? So far, mine look as such:

Universal forwarder's outputs.conf:

[tcpout]
defaultGroup = pspr-heavy-forwarder

[tcpout:pspr-heavy-forwarder]
disabled = false
server = 192.168.60.213:9997

Heavy forwarder's outputs.conf:

[tcpout]
defaultGroup = central-indexer
indexAndForward = false
sendCookedData = true
useACK = true

[tcpout:central-indexer]
disabled = false
server = 192.168.60.211:9997

Indexer's inputs.conf:

[default]
queue = indexQueue

I've directed all universal forwarders to send to the intermediate forwarder, but the main indexer is still showing saturated queues. Local monitoring is limited to Splunk's own logs. Is there a way I can view what exactly is going into these queues, so I know where to chase the problem?
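The indexer's own metrics.log tracks queue fill levels, which helps show which queue saturates and when. A sketch to run against _internal (replace the host filter with the real indexer name):

```spl
index=_internal source=*metrics.log group=queue host=<indexer>
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) by name
```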
Hi, Currently I am running Splunk version 7.3.3 on RHEL 6.10 with Python 2.7 at the OS level. I got a notice to upgrade the Python version to 3.6. I have seen and read somewhere that Splunk ships with its own Python library. So my question is: if I upgrade the OS Python version to 3.6, will it break my Splunk infrastructure running 7.3.3?
I have this search:

index=raw_maximo GR_RESP="OPERACAO MAINFRAME" STATUS!=Cancelado | stats count(INCIDENTE) by GR_RESP

Could you help me with this?