All Topics

Hi, I am trying to monitor our Unix boxes (RedHat) without success. I deployed the universal forwarder following the instructions (https://docs.splunk.com/Documentation/Forwarder/8.2.4/Forwarder/Configuretheuniversalforwarder). I installed the RPM and registered the deployment server and the receiving indexer:

./splunk add forward-server <host name or ip address>:<listening port>
./splunk set deploy-poll <host name or ip address>:<management port>

I correctly see the new Linux box in Splunk Web under forwarder management. Then I installed the Add-on for Unix, also following the instructions (https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Enabledataandscriptedinputs). I copied the add-on to the folder C:\Program Files\Splunk\etc\deployment-apps and deployed it to the Linux box using Splunk Web (I created the server class and assigned the client and the TA_nix). Then I logged into the Linux box and enabled the data inputs from the command line:

./splunk cmd sh $SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/setup.sh --enable-all

I restarted the Splunk forwarder as indicated in the instructions. Here is where I get lost: I don't see any mention of which index the events should go to. I don't see any new index created by the add-on, so I created the index "os" myself. Is this correct? I also added index=os to all stanzas in local/inputs.conf, and the events started to appear in the index "os". Is this the way to do it? Are there other actions that I missed? Thanks a lot
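For anyone comparing notes: Splunk_TA_nix does not create an index, so creating "os" yourself and pointing the inputs at it is the usual approach. A minimal sketch of what one overridden stanza in Splunk_TA_nix/local/inputs.conf might look like (cpu.sh is one of the add-on's shipped inputs; the interval is an arbitrary example):

```ini
[script://./bin/cpu.sh]
interval = 30
sourcetype = cpu
index = os
disabled = 0
```

Remember the "os" index also has to exist on the indexer side (indexes.conf), not just in the forwarder's inputs.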
Hello all, in my organization we are trying to collect logs from all laptops/desktops into Splunk. I read somewhere that we can use logs collected from AV agents instead of installing universal forwarders. We have CrowdStrike agents on all our endpoint devices. Is this the right method? If so, what are the use cases where we may still have to install a UF on endpoint devices? Thank you
Every night my server crashes with an out-of-memory error, although I have more than enough memory. In the event logs I get "Unable to allocate dynamic memory buffer", and "Faulting application name: splunkd, faulting module name: ucrtbase.dll". The system log shows a low-memory condition. It looks like Splunk has a memory leak, so how do I find out what is causing it?
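One place to start is Splunk's own introspection data, which records per-process memory use over time. A sketch, assuming the _introspection index is populated on this host (replace the host filter with your server name):

```
index=_introspection host=<your_server> component=PerProcess data.process=splunkd
| timechart span=10m max(data.mem_used) as mem_used_mb by data.pid
```

A line that climbs steadily and never drops until the nightly crash would support the leak theory; you can then correlate that time window against splunkd.log to see which component is busy.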
Upgrading from Splunk 7.3.9 to versions before 8.0.8 or 8.1.1 will fail. During the install process the installer errors with messages relating to the following files: libxml2.dll, libeay32.dll, ssleay32.dll. After dismissing these messages, the installer rolls back and reverts to the previously installed version 7.3.9. This appears related to the dates on these files, which the installer does not handle correctly and does not overwrite. At the end of the failed install (before rollback) these 3 files are missing from the $SPLUNK_HOME/bin folder. I presume the installer removes (backs up) the original files and during the install validates that the files to be installed are newer. In the case of these 3 files, that check is not handled correctly and the installer fails to remedy the situation.
Hello experts, if I only have the IP addresses of hosts from a search, how do I look up their hostnames from a lookup table? Let's say I search index=network_device. I have a lookup table that contains the IP addresses and host names of all assets.
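Assuming the lookup file is called assets.csv with columns ip and hostname, and the search results carry the address in a field called src_ip (all three names are guesses — substitute your own), the lookup command would look like:

```
index=network_device
| lookup assets.csv ip AS src_ip OUTPUT hostname
| table src_ip, hostname
```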
Hello Splunkers, is it possible to go back to the Classic Experience once you have upgraded to the Victoria Experience?
Hello, could you please help me get a better understanding of the UF? Can we still use the Splunk UF even after the license end date, i.e. will the Splunk agent still forward data to QRadar?
Hello, I am aware that there is already a question from way back called "finding peak and low times from timechart". However, with that solution I can only get the overall max and min values. I tried to adapt the solution for my issue. Here it goes: I have multiple customers and want to find the peaks for each of them. While the solution

index=web GET OR POST | timechart span=1h count | eventstats max(count) as high, min(count) as low | where (count=low OR count=high) | fields _time, count

works perfectly for overall peaks, I struggle to get it flying with a "by" clause for customers, so something like:

| timechart span=1h count by customer | eventstats max(count) as high, min(count) as low by customer

At this point, however, there is no field "count" anymore. Kind regards, Mike
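One common way around this is to flatten the timechart back into rows with untable, so that count exists again as a single field and eventstats can group by customer. A sketch based on the query above:

```
index=web GET OR POST
| timechart span=1h count by customer
| untable _time customer count
| eventstats max(count) as high, min(count) as low by customer
| where count=high OR count=low
| table _time, customer, count
```

untable turns the one-column-per-customer timechart output into one row per (_time, customer) pair, which is the shape eventstats ... by customer expects.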
Hi, I've created an alert for one of my main API services. How it works: it runs every 30 minutes, looks at the failure rate and the number of failed requests, and based on the threshold (failedRequest > 200 AND failure rate > 10%) it triggers the alert and raises an incident. Now, there are times when during those 30 minutes there is a short blip of 5 minutes with a large number of errors while the rest of the window is normal. In that case the alert still fires because it meets the threshold. How can I avoid that? Is it possible to look at the number of errors and only trigger the alert if they are consistent for, say, 20 or 30 minutes? How can I achieve that? Here is my sample query; let me know if anyone can advise on this, it would be immensely helpful.

index=myapp_prod source=myapp "message.logPoint"=OUTGOING_RESPONSE (message.httpResponseCode=50* OR message.httpResponseCode=20*)
| rename message.serviceName as serviceName message.httpResponseCode as httpResponseCode
| where serviceName LIKE "my-service"
| stats count as totalrequests count(eval(httpResponseCode=200)) as successrequest count(eval(httpResponseCode=500 OR httpResponseCode=502 OR httpResponseCode=503)) as failedrequest
| eval Total = successrequest + failedrequest
| eval failureRatePercentage = round(((failedrequest/Total) * 100),2)
| where failureRatePercentage > 10 AND failedrequest > 200
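One way to require the errors to be sustained is to bucket the 30-minute window into 5-minute slices, evaluate the threshold per slice, and only alert when enough slices breach. A sketch (the per-slice thresholds of 33 failed requests / 10% and the 4-slice requirement are arbitrary examples to tune, roughly scaling the 30-minute limits down to 5 minutes):

```
index=myapp_prod source=myapp "message.logPoint"=OUTGOING_RESPONSE (message.httpResponseCode=50* OR message.httpResponseCode=20*)
| rename message.serviceName as serviceName, message.httpResponseCode as httpResponseCode
| where serviceName LIKE "my-service"
| bin _time span=5m
| stats count(eval(httpResponseCode=200)) as successrequest, count(eval(httpResponseCode=500 OR httpResponseCode=502 OR httpResponseCode=503)) as failedrequest by _time
| eval total = successrequest + failedrequest
| eval failureRate = round((failedrequest / total) * 100, 2)
| eval breached = if(failedrequest > 33 AND failureRate > 10, 1, 0)
| stats sum(breached) as breachedSlices
| where breachedSlices >= 4
```

A 5-minute blip then trips at most one slice and the alert stays quiet, while 20+ minutes of sustained errors trips four or more slices and fires.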
Good morning, I've followed guides/forums and steps on this site but still can't get my blacklists to work at all. The situation is that I've set up a Splunk alert-monitor dashboard, and one of the alerts is on new process starts; the Splunk forwarder is causing hundreds of alerts on this, so I want to blacklist it. Firstly, could someone please confirm which inputs.conf to edit, as there are multiple. Secondly, is this order correct?

[WinEventLog://Security]
disabled=0
current_only=1
blacklist = 4689,5158

i.e. is the blacklist option in the right place? There are a few other lines in the inputs.conf I've found, like oldest first. Finally, what string will actually work and stop me seeing all processes started by Splunk? Thank you in advance.
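For what it's worth, the Windows event log input also supports regex-style blacklists keyed on event fields, which lets you drop only the Splunk-generated process events instead of whole event codes. A sketch for the deployed app's local/inputs.conf on the forwarder (the Message regex is an assumption about how your 4688 events name the new process — adjust to match your install path):

```ini
[WinEventLog://Security]
disabled = 0
current_only = 1
# drop process-termination noise entirely
blacklist1 = EventCode="4689"
# drop only process-creation events whose new process lives under the Splunk folder
blacklist2 = EventCode="4688" Message="New Process Name:\s+\S*SplunkUniversalForwarder"
```

Note the numbered form (blacklist1, blacklist2, ...) is what enables the key="regex" syntax; a bare blacklist = 4689,5158 only filters on event codes.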
Hello, we would like to use the rising-column input mode for a DB Connect (2.x) query. Unfortunately, the source table is an Oracle table and it only has a date field that could be used as the rising column, but if I read the documentation correctly, this may lead to duplicates. Is this correct? Is there any other way to solve this problem without modifying the source table? Thanks
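For reference, a rising-column input effectively runs a query shaped like the sketch below, with the last checkpointed value substituted for the ? (table and column names here are placeholders):

```sql
SELECT *
FROM my_schema.my_table
WHERE date_col > ?
ORDER BY date_col ASC
```

Because the checkpoint comparison is a strict greater-than against a single stored value, several rows sharing the same DATE value around a checkpoint can be skipped or re-read, which is where the duplicate/missing-row concern with a low-resolution date column comes from.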
I need to add an export button at the top of a dashboard, simply one that recalls Splunk's built-in export function that lets you select the name, file type, and number of events saved. I found the element that triggers that function; by typing it in the console I can call it, but I have problems integrating it into a button I can put in my dashboard. Does anyone have any idea how to do it? The call is:

document.getElementsByClassName("btn-pill export")[0].click()

Any help will be appreciated. Thanks
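A sketch of how this could be wired up from a dashboard JS extension: declare a button in an HTML panel (the id export_btn is made up here), then bind the click in a file under the app's appserver/static, referenced by the dashboard's script= attribute:

```javascript
// my_dashboard.js (referenced via <dashboard script="my_dashboard.js">)
require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function ($) {
    // hypothetical button declared in an <html> panel:
    //   <button id="export_btn" class="btn">Export</button>
    $('#export_btn').on('click', function () {
        var exportBtn = document.getElementsByClassName('btn-pill export')[0];
        if (exportBtn) {
            exportBtn.click();  // opens Splunk's built-in export dialog
        }
    });
});
```

One caveat: this piggybacks on an internal CSS class, so it can break across Splunk versions; guard the lookup (as above) and retest after upgrades.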
How can I combine the events from 2 different indexes and display the results in a table when there are no matching fields between the indexes? Please suggest.
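Even without a common field you can stack the two result sets with append and tag each row with where it came from; the index and field names below are placeholders for your own:

```
index=index_a
| eval origin="index_a"
| append
    [ search index=index_b
      | eval origin="index_b" ]
| table _time, origin, field1, field2
```

If the two sources should instead be merged by time rather than concatenated, searching both indexes at once (index=index_a OR index=index_b) before the table works too.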
Hi, I am trying to search for hosts that have sent logs over the last 7 days. Anything older than 7 days I would like to exclude from my results. Right now I am using this query, searching over the last 7 days:

| metadata type=hosts index=*
| rename totalCount as Count firstTime as "First Event" lastTime as "Last Event" recentTime as "Last Update" host as "Hostname"
| table Hostname Count "First Event" "Last Event" "Last Update"
| fieldformat Count=tostring(Count, "commas")
| fieldformat "First Event"=strftime('First Event', "%d-%m-%Y %k:%M")
| fieldformat "Last Event"=strftime('Last Event', "%d-%m-%Y %k:%M")
| fieldformat "Last Update"=strftime('Last Update', "%d-%m-%Y %k:%M")
| sort by "Last Update"
| reverse

This query gives me what I want, but towards the end of the results the last-update times include hosts that last sent data a few months ago. Can anybody enlighten me on how to get results covering only the last 7 days, up to 28 Jan 2022?
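The metadata command works at index-bucket granularity rather than strictly honoring the time picker, which is why months-old hosts still appear. Filtering explicitly on the raw recentTime epoch before renaming it should do the trick:

```
| metadata type=hosts index=*
| where recentTime >= relative_time(now(), "-7d@d")
| rename totalCount as Count firstTime as "First Event" lastTime as "Last Event" recentTime as "Last Update" host as "Hostname"
| table Hostname Count "First Event" "Last Event" "Last Update"
| fieldformat Count=tostring(Count, "commas")
| fieldformat "First Event"=strftime('First Event', "%d-%m-%Y %k:%M")
| fieldformat "Last Event"=strftime('Last Event', "%d-%m-%Y %k:%M")
| fieldformat "Last Update"=strftime('Last Update', "%d-%m-%Y %k:%M")
| sort - "Last Update"
```

The where clause must come before the rename, while recentTime is still a plain epoch field; `sort - "Last Update"` also replaces the sort-then-reverse pair.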
I am trying to extract the exception name, which is on the 4th line of a log generated as below:

<CS-1>2022-02-03T14:58:21.128+0100 ERROR org.flowable.job.service.impl.asyncexecutor.DefaultAsyncRunnableExecutionExceptionHandler 77037 DefaultAsyncRunnableExecutionExceptionHandler.java:44 - [{user=system}] - Job JOB-2d21fa4f-84f8-11ec-9094-02425ecfb8fb failed
org.flowable.common.engine.api.FlowableOptimisticLockingException: JobEntity [id=JOB-2d21fa4f-84f8-11ec-9094-02425ecfb8fb] was updated by another transaction concurrently
at org.flowable.common.engine.impl.db.DbSqlSession.flushDeleteEntities(DbSqlSession.java:643) ~[flowable-engine-common-6.6.0.17.jar!/:6.6.0.17]

I want a field extraction of the exception name (FlowableOptimisticLockingException, with its package prefix), i.e. everything on that line up to the colon (:). I am trying to use this regex, which does not work in the Splunk field extractor:

^(.*\n){3}(?P<test_work_error>.+Exception:)

Please advise. Thanks in advance
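As a side note, counting lines with `(.*\n){3}` is fragile (the repetition group only keeps its last match, and the exception is not always on a fixed line). A position-independent alternative is to anchor, in multiline mode, on any line that starts with a dotted class name ending in "Exception". A sketch in Python to demonstrate the pattern itself; the same PCRE-style regex should work in Splunk's rex command or the field extractor:

```python
import re

# abbreviated sample of the multi-line event from the post
event = (
    "<CS-1>2022-02-03T14:58:21.128+0100 ERROR ...Handler 77037 ... Job JOB-2d21fa4f failed\n"
    "org.flowable.common.engine.api.FlowableOptimisticLockingException: JobEntity [id=JOB-2d21fa4f] "
    "was updated by another transaction concurrently\n"
    "at org.flowable.common.engine.impl.db.DbSqlSession.flushDeleteEntities(DbSqlSession.java:643)\n"
)

# (?m) makes ^ match at every line start; capture a dotted class name that
# ends in "Exception" and is immediately followed by a colon.
pattern = r"(?m)^(?P<test_work_error>[\w.]+Exception):"
m = re.search(pattern, event)
print(m.group("test_work_error"))
# -> org.flowable.common.engine.api.FlowableOptimisticLockingException
```

In SPL this would look like `| rex "(?m)^(?<test_work_error>[\w.]+Exception):"` — note also that the extraction can only work if the whole stack trace is actually indexed as one multi-line event.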
Hello Splunkers! Recently I installed splunkforwarder 8.2.1. After installation, 2 errors are showing.

1. After installing splunkforwarder 8.2.1 on an AIX (7100-05-04-1914) server, every time I execute a ./splunk command on the CLI, the CLI window closes. The only command that doesn't close the CLI window is ./splunk status. What should I do to fix this problem?

2. When I was installing splunkforwarder 8.2.1 on a Solaris (5.10 sn4v) server, it gave me a library error and it could not be installed. The error is:

ld.so.1: splunk : critical : libc.so.1 : version 'SUNWpublic' not found (required by file splunk)
ld.so.1: splunk : critical : libc.so.1 : open failed : no such file or directory

How can I get past this error? Thank you in advance.
Hello all, I am trying to exclude a specific value within a field while retaining the others. Can you please let me know how?

Example values:
1) /Server/Cpu/load/Login
2) /Server/Memory/usage
3) /Load/usage/value

These values are extracted from the event, and I need to remove only the /Server part from the field while retaining all the other values. Expected values:
1) /Cpu/load/Login
2) /Memory/usage
3) /Load/usage/value

Please help in getting this.
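Assuming the extracted field is called path (substitute your real field name), an eval with replace() that only strips a leading /Server segment would be:

```
... your base search ...
| eval path = replace(path, "^/Server/", "/")
```

replace() leaves the value untouched when the regex does not match, so /Load/usage/value passes through unchanged while /Server/Cpu/load/Login becomes /Cpu/load/Login.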
Greetings!!! How do I upgrade Splunk Enterprise Security from version 5.3.0 to 7.0? I am running Splunk Enterprise 7.2.6. Kindly advise and guide me on how I can upgrade. Thank you in advance!
Hi all, I am trying to call a custom endpoint from a dashboard JavaScript file on user interaction (this is a setup page).

python_code.py

class TestAndSaveOrUpdateCredentials(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(PersistentServerConnectionApplication, self).__init__()

    def handle(self, in_string):
        return {
            "payload": in_string,
            "status": 200
        }

restmap.conf

[script:test_endpoint]
match = /testing-123
script = python_code.py
scripttype = persist
handler = python_code.TestAndSaveOrUpdateCredentials
passHttpHeaders = true
output_modes = json
passHttpCookies = true

web.conf

[expose:test_endpoint]
methods = GET, POST
pattern = testing-123

JavaScript

const appNamespace = {
    owner: "",          // tried with admin, nobody
    app: "",            // tried with app_name
    sharing: "global",  // tried with 'app'
};
const http = new splunkjs.SplunkWebHttp();
const service = new splunkjs.Service(http, appNamespace);
service.get("testing-123")
// service.get("services/testing-123")

I am able to call localhost:8089/services/testing-123 from Postman, but from JavaScript I see this error:

{"messages":[{"type":"ERROR","text":"JSON reply had no \"payload\" value"}]}

Please let me know where I am going wrong. Thanks.
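One thing worth checking: splunkjs.SplunkWebHttp routes requests through splunkweb rather than straight at splunkd on port 8089, so the path has to resolve under splunkweb's splunkd proxy. A sketch of an alternative that calls the endpoint through the /splunkd/__raw proxy with plain AJAX (module names and the proxy path are as commonly used in dashboard extensions — verify against your deployment):

```javascript
require(['jquery', 'splunk.util', 'splunkjs/mvc/simplexml/ready!'], function ($, splunkUtil) {
    $.ajax({
        // proxies to https://<splunkd>:8089/services/testing-123,
        // reusing the logged-in Splunk Web session
        url: splunkUtil.make_url('/splunkd/__raw/services/testing-123'),
        type: 'GET',
        success: function (data) { console.log('endpoint reply:', data); },
        error: function (xhr) { console.error(xhr.status, xhr.responseText); }
    });
});
```

Also note the handler's "payload" value must be JSON-serializable; returning the raw in_string is fine for testing.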
Is it possible to set the Y-axis and X-axis to fixed values when displaying an OutLierChart?