All Topics



How do I combine the events from 2 different indexes and display the results in a table when there are no matching fields in the indexes? Please suggest.
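One hedged sketch (index and field names here are placeholders, not from the question): when there is no common field to join on, you can simply append the result sets of the two searches and lay them out in one table; rows from each index fill in their own columns.

```spl
index=index_a
| table fieldA1 fieldA2
| append
    [ search index=index_b
      | table fieldB1 fieldB2 ]
```

Alternatively, `index=index_a OR index=index_b | table index fieldA1 fieldB1` keeps both sets in a single search if the events can share a time range.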
Hi, I am trying to search for hosts that have sent logs over the last 7 days; anything older than 7 days I would like to exclude from my results. Right now I am using this query, searching over the last 7 days:

| metadata type=hosts index=*
| rename totalCount as Count firstTime as "First Event" lastTime as "Last Event" recentTime as "Last Update" host as "Hostname"
| table Hostname Count "First Event" "Last Event" "Last Update"
| fieldformat Count=tostring(Count, "commas")
| fieldformat "First Event"=strftime('First Event', "%d-%m-%Y %k:%M")
| fieldformat "Last Event"=strftime('Last Event', "%d-%m-%Y %k:%M")
| fieldformat "Last Update"=strftime('Last Update', "%d-%m-%Y %k:%M")
| sort by "Last Update"
| reverse

This query gives me what I wanted, but towards the end of the results the last-update times include hosts that last sent a few months ago. Can anybody enlighten me on what I should do so the results only cover the last 7 days, up to 28 Jan 2022?
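A hedged suggestion (assuming `recentTime` from the metadata command holds the epoch of the most recent indexing for each host): the metadata command does not strictly honor the time picker, so filter on `recentTime` explicitly before the renames:

```spl
| metadata type=hosts index=*
| where recentTime >= relative_time(now(), "-7d@d")
| rename totalCount as Count recentTime as "Last Update" host as "Hostname"
```

The `where` clause drops any host whose most recent event is older than 7 days, so the stale hosts never reach the table.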
I am trying to extract the exception name, which is on the 4th line of a log generated as below:

<CS-1>2022-02-03T14:58:21.128+0100 ERROR org.flowable.job.service.impl.asyncexecutor.DefaultAsyncRunnableExecutionExceptionHandler 77037 DefaultAsyncRunnableExecutionExceptionHandler.java:44 - [{user=system}] - Job JOB-2d21fa4f-84f8-11ec-9094-02425ecfb8fb failed org.flowable.common.engine.api.FlowableOptimisticLockingException: JobEntity [id=JOB-2d21fa4f-84f8-11ec-9094-02425ecfb8fb] was updated by another transaction concurrently at org.flowable.common.engine.impl.db.DbSqlSession.flushDeleteEntities(DbSqlSession.java:643) ~[flowable-engine-common-6.6.0.17.jar!/:6.6.0.17]

I want a field extraction of the exception name highlighted above in blue; its position is the 4th line, up to the colon (:). I am trying to use this regex, which does not work in Splunk field extraction:

^(.*\n){3}(?P<test_work_error>.+Exception:)

Please advise. Thanks in advance.
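A hedged sketch (the field name is the one from the question; this assumes the exception really is the first token before a colon on the 4th line of the event): Splunk's regex engine needs the multiline flag enabled inside the pattern for `^` to behave per line, and the named-group syntax is `(?<name>...)` rather than `(?P<name>...)`:

```spl
| rex "(?m)^(?:[^\n]*\n){3}(?<test_work_error>[^:\n]+)\s*:"
```

Using `[^\n]*` instead of `.*` for the skipped lines also avoids heavy backtracking. If this needs to be a saved extraction rather than a search-time `rex`, the same pattern can go into an EXTRACT-/inline-extraction definition, though multiline extractions there can behave differently.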
Hello Splunkers! Recently I installed splunkforwarder 8.2.1. After installation, 2 errors are showing.

1. After installing splunkforwarder 8.2.1 on an AIX (7100-05-04-1914) server, every time I execute the ./splunk command on the CLI, the CLI window closes. The only command that doesn't close the CLI window is ./splunk status. What should I do to fix this problem?

2. When I was installing splunkforwarder 8.2.1 on a Solaris (5.10 sn4v spray) server, it gave me a library error and could not be installed. The error is:

ld.so.1: splunk : critical : libc.so.1 : version 'SUNWpublic' not found (required by file splunk)
ld.so.1: splunk : critical : libc.so.1 : open failed : no such file or directory

How can I get past this error and complete the install? Thank you in advance.
Hello all, I am trying to exclude a specific value within a field while retaining the others. Can you please advise?

Example values:
1) /Server/Cpu/load/Login
2) /Server/Memory/usage
3) /Load/usage/value

These values are extracted from the event, and I need to remove only the /Server part from the field while retaining all the other values.

Expected values:
1) /Cpu/load/Login
2) /Memory/usage
3) /Load/usage/value

Please help with this.
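A hedged sketch (the field name `mypath` is a placeholder, not from the question): an eval `replace` that strips a leading /Server segment and leaves everything else untouched:

```spl
| eval mypath=replace(mypath, "^/Server/", "/")
```

With the three example values, only the first two start with /Server/, so they become /Cpu/load/Login and /Memory/usage, while /Load/usage/value passes through unchanged.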
Greetings! How do I upgrade Splunk Enterprise Security from version 5.3.0 to version 7.0? I am running Splunk Enterprise 7.2.6. Kindly advise and guide me on how I can upgrade. Thank you in advance!
Hi all, I am trying to call a custom endpoint from a dashboard JavaScript file on user interaction (this is a setup page).

python_code.py

class TestAndSaveOrUpdateCredentials(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(PersistentServerConnectionApplication, self).__init__()

    def handle(self, in_string):
        return {
            "payload": in_string,
            "status": 200
        }

restmap.conf

[script:test_endpoint]
match = /testing-123
script = python_code.py
scripttype = persist
handler = python_code.TestAndSaveOrUpdateCredentials
passHttpHeaders = true
output_modes = json
passHttpCookies = true

web.conf

[expose:test_endpoint]
methods = GET, POST
pattern = testing-123

JavaScript

const appNamespace = {
    owner: "",        // Tried with admin, nobody
    app: "",          // Tried with app_name
    sharing: "global" // Tried with 'app'
};
const http = new splunkjs.SplunkWebHttp();
const service = new splunkjs.Service(
    http,
    appNamespace,
);
service.get("testing-123")
// service.get("services/testing-123")

I am able to call localhost:8089/services/testing-123 from Postman, but from JavaScript I am seeing this error:

{"messages":[{"type":"ERROR","text":"JSON reply had no \"payload\" value"}]}

Please let me know where I am going wrong. Thanks.
Is it possible to set the Y-axis and X-axis to fixed values when displaying the OutLierChart chart?
Hi folks, what query can I use to count my field "viewer.Id" to see how many viewers we had between 01/22/2022 and 02/02/2022? I would like to see the increment/decrement in the count from my results, and also the change in % when comparing different dates. Thanks, Evans
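A hedged sketch (the index name is a placeholder; the field name is from the question): a daily distinct count of viewers, with the day-over-day change and a percentage, assuming that is the comparison wanted:

```spl
index=your_index earliest="01/22/2022:00:00:00" latest="02/03/2022:00:00:00"
| timechart span=1d dc("viewer.Id") as viewers
| delta viewers as change
| eval pct_change=round(change / (viewers - change) * 100, 2)
```

`delta` subtracts each day's count from the previous day's, so `change` is the increment/decrement; the first row has no previous value and will show a null change.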
Hello, I need a role that can only create users and roles. I selected the capabilities admin_all_objects, edit_user and edit_roles. I don't have any problems creating users, but when I try to create roles from this role, the Create button is inactive, preventing me from adding the new role. Any help is really appreciated. Regards, TGMAna
I have an issue with my Splunk forwarder. Inside inputs.conf, the interval is set to run at 5 9 * * *, so 09:05 daily if I wrote the cron correctly. I restart the Splunk service as the splunk user. The job will run ONE time, at 09:05. The only way I can get it to run as scheduled is if I put the interval in as 86400 seconds. Does anyone else have this problem?
So I'm trying to set up REST API calls with Add-on Builder, and it requires two params: 'fromDate' and 'toDate'. I ran into 2 problems:

1) 'toDate' in my case is the date/time now (at the moment of the API call). Is it possible to set this param to something like Date.now() in JavaScript?

2) 'fromDate' should be a checkpoint taken from the last record of the last response. The problem is that in the response this timestamp is in UNIX format, while the request needs UTC %d/%m/%y%H%M. How can I convert UNIX to UTC? Also, can I add one extra second to the extracted timestamp so my data won't overlap?
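A hedged sketch of the conversion itself (the format string is the one from the question; the field name `checkpoint` is a placeholder): in SPL, `strftime` renders an epoch value in the given format, and adding 1 to the epoch before formatting gives the one-second offset to avoid overlap:

```spl
| eval fromDate=strftime(checkpoint + 1, "%d/%m/%y%H%M")
```

Note that `strftime` formats in the search-time timezone rather than strictly UTC. The same "%d/%m/%y%H%M" format string should also work with `time.strftime`/`time.gmtime` in the add-on's Python code if the conversion has to happen inside the modular input instead of at search time.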
Hi All, my solution involves a heavy forwarder that sends data to an indexer. In the transforms.conf file I have a regex that lets me filter, by string, only the lines I need.

Regex example:
REGEX = ^.*(?:SIMONE|MARCO).*

Example of the file.log to monitor:
xxxxx|xxxxx|xxxxx|SIMONE|xxxxx|xxxxx|xxxxx
xxxxx|xxxxx|xxxxx|VALERIO|xxxxx|xxxxx|xxxxx
xxxxx|xxxxx|xxxxx|SILVIA|xxxxx|xxxxx|xxxxx
xxxxx|xxxxx|xxxxx|MARCO|xxxxx|xxxxx|xxxxx

I am seeing these errors:

ERROR Regex - Failed in pcre_exec: Error PCRE_ERROR_MATCHLIMIT for regex:
WARN regexExtractionProcessor - Regular expression for stanza xxxxx exceeded configured PCRE match limit. One or more fields might not have their values extracted, which can lead to incorrect search results. Fix the regular expression to improve search performance or increase the MATCH_LIMIT in props.conf to include the missing field extractions.

I wanted to know: is there a different way to filter the data to send, without regex? Best Regards, Simone
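One hedged alternative (stanza names below are placeholders): queue routing in transforms.conf still requires a REGEX, but the cost can drop dramatically by anchoring on the pipe delimiters instead of wrapping the keywords in `^.* ... .*`, which backtracks heavily on non-matching lines. A sketch of the usual keep-list pattern:

```spl
# props.conf — apply both transforms to the sourcetype (placeholder name)
[my_sourcetype]
TRANSFORMS-filter = drop_all, keep_simone_marco

# transforms.conf — drop everything by default...
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then re-route only lines containing |SIMONE| or |MARCO| to the index queue
[keep_simone_marco]
REGEX = \|(?:SIMONE|MARCO)\|
DEST_KEY = queue
FORMAT = indexQueue
```

The delimiter-anchored pattern fails fast without scanning the whole line repeatedly, which should avoid the PCRE match-limit errors.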
I would like to group URL fields and get a total count. When I do this:

index=example source=example_example dest="*.amazonaws.com" OR dest="*.amazoncognito.com" OR dest="slack.com" OR dest="*.docker.io"
| dedup dest
| table dest
| stats count by dest

the output is this:

dest                                                       count
352532535.abc.def.eu-xxxxx-1.amazonaws.com                 1
abc.auth.xx-aaaa-1.amazoncognito.com                       1
aaa1-stage-login-abcdef.auth.xx-abcd-1.amazoncognito.com   1
346345452.abc.def.us-abcd-2.amazonaws.com                  1
autoscaling.xx-east-4.amazonaws.com                        1
slack.com                                                  1
registry-1.docker.io                                       1
auth.docker.io                                             1

I wanted to group them by similar patterns, like this:

groupedURL           count
.amazonaws.com       3
.amazoncognito.com   2
slack.com            1
.docker.io           2

I've tried other possible queries based on some postings here, but no luck. It was mostly after the '.com'.
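A hedged sketch (assuming the grouping wanted is "collapse each dest to its last two DNS labels"): extract the registered-domain suffix with `rex`, then count per suffix:

```spl
index=example source=example_example dest="*.amazonaws.com" OR dest="*.amazoncognito.com" OR dest="slack.com" OR dest="*.docker.io"
| dedup dest
| rex field=dest "(?<groupedURL>[^.]+\.[^.]+)$"
| stats count by groupedURL
```

This yields amazonaws.com 3, amazoncognito.com 2, slack.com 1, docker.io 2 for the sample data (without the leading dot shown in the desired output; prepend one with an `eval` if needed).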
I was investigating bundle sizes coming from one of my SHCs and came across several apps in the bundle that had the following in the lookups directory. Qualys is just one example; there are several other apps where index.default and index.alive files are present. Can someone tell me what these are and what they're doing in a knowledge bundle?

qualys_kb.csv_1534282613.index.default
qualys_kb.csv_1643803241.755269.cs.index.alive
How do I set the visibility of panels in Dashboard Studio? I was going to create a multiselect input, but how can I tie the visibility of a panel to the selection being made?
I work in a large, clustered Splunk ES environment. Should the KV stores only be running on the SHs? It looks like after upgrading to 8.2.4 we have to use the WiredTiger KV store engine. Do you have any input on which tier the KV stores need to run on, and any input on using the WiredTiger engine? I appreciate your response in advance.
It looks like this particular example (Table with Data Bars) does not work for me. Is there anything in particular I should check? Is it a bug? I'm using Splunk 8.2.1 and the latest Dashboard Examples app (8.2.2).
I recently started trying to set up some field extractions for a few of our events. In this case the logs are pipe-delimited and contain only a few segments. I've found that most of these attempts result in a rex error regarding limits in limits.conf. For example, this record:

2022-02-03 11:45:21,732 |xxxxxxxxxxxxxxx.xxxxxx.com~220130042312|<== conn[SSL/TLS]=274107 op=26810 MsgID=26810 SearchResult {resultCode=0, matchedDN=null, errorMessage=null} ### nEntries=1 ### etime=3 ###

When I attempt to use a pipe-delimited field extraction (for testing), the result is this error. When I toss the regex from the error into regex101 (https://regex101.com/r/IswlNh/1), it tells me it requires 2473 steps, which is well above the default 1000 for depth_limit. How is it that an event with 4 pipe-delimited segments is so bad?

I realize there are 2 limits (depth_limit/match_limit) in play here and I can increase them, but nowhere can I find recommended values to use as a sanity check. I also realize I can optimize the regex, but since I'm setting this up via the UI using the delimited option, I don't have access to the regex at creation time. Not to mention, many of my users use this option because they are not regex gurus.

So my big challenge/question is: where do I go from here? My users are going to use this delimited option, which evidently generates some seriously inefficient regex under the covers. Do I increase my limit(s), and if so, what is a sane/safe value? Is there something I'm missing? Thanks!
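A hedged sketch of the usual hand-written alternative (field names are placeholders, not from the question): a delimiter extraction built from negated character classes matches in a single left-to-right pass with essentially no backtracking, unlike the UI-generated pattern:

```spl
| rex "^(?<timestamp>[^|]*)\|(?<server_id>[^|]*)\|(?<message>.*)$"
```

Each `[^|]*` consumes everything up to the next pipe and can never retry earlier positions, so the step count stays linear in the event length regardless of how many segments there are.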
Hello Splunkers, I have a question about building Splunk apps with Dashboard Studio; it has to do with the portability of the app. The traditional way of building Splunk apps via Simple XML lets you save images in the static folder inside your app, so whenever you download the app from Splunkbase you have everything you need. Dashboard Studio, by contrast, saves your images and icons in the KV store. With this in mind, how would you package a Splunk app that uses Dashboard Studio without losing any pictures or icons? Thank you, Marco