All Topics

Hi, currently I am showing one data point per column with the query below:

application="my-app" "*test-path*"
| rename test-path as path
| eval result=case((path == "/test-data/test/data"), "Total count")
| timechart span=1d count
| eval day=strftime(_time,"%d/%m")
| fields day, count

But I want to show three values for each daily column, so I am trying this:

application="my-app" "*test-path*"
| rename test-path as path
| eval result=case((path == "/test-data/test/data"), "Total count", (path == "/test/test2-mydata/order"), "Total order")
| timechart span=1d count
| eval day=strftime(_time,"%d/%m")
| fields day, count

but it is not working.
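Outside Splunk, the per-day, per-category counting the poster is after (in SPL this would typically need a by-clause on the timechart, e.g. counting by the result field) can be sketched in Python. The events, days, and paths below are made up for illustration; only the case-style path-to-label mapping comes from the post:

```python
from collections import Counter

# Hypothetical events as (day, path) pairs; real data would come from
# the poster's application="my-app" search.
events = [
    ("01/06", "/test-data/test/data"),
    ("01/06", "/test/test2-mydata/order"),
    ("01/06", "/test-data/test/data"),
    ("02/06", "/test/test2-mydata/order"),
]

# path -> series label, mirroring the eval/case expression in the post
LABELS = {
    "/test-data/test/data": "Total count",
    "/test/test2-mydata/order": "Total order",
}

# Count per (day, label): one column per day, one value per label.
counts = Counter((day, LABELS[path]) for day, path in events if path in LABELS)
print(counts)
```

The key point the sketch makes explicit: the count must be keyed by both the day and the category, which is what the post's plain `timechart span=1d count` (with no split field) does not do.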
I have the following data:

email|country|license
aa|HK|365E1
bb|US|365E2
cc|HK|non-office
dd|HK|non-office
ee|UK|non-office

I would like to get a bar chart with the counts of adopted (365E1 + 365E2) and non-adopted licenses, split by country:

base_search | chart dc("email") AS Count over country by license
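The grouping the poster wants (collapse the license values into adopted / non-adopted, then count distinct emails per country and category) can be sketched in Python using the sample rows from the post; the category names are illustrative:

```python
from collections import defaultdict

# Sample data from the post: (email, country, license)
rows = [
    ("aa", "HK", "365E1"),
    ("bb", "US", "365E2"),
    ("cc", "HK", "non-office"),
    ("dd", "HK", "non-office"),
    ("ee", "UK", "non-office"),
]

ADOPTED = {"365E1", "365E2"}  # licenses treated as "adopted"

# Distinct emails per (country, category), mirroring
# `chart dc(email) over country by category`.
buckets = defaultdict(set)
for email, country, license_ in rows:
    category = "adopted" if license_ in ADOPTED else "non-adopted"
    buckets[(country, category)].add(email)

result = {key: len(emails) for key, emails in buckets.items()}
print(result)
```

The point is that the two license codes have to be folded into one category before charting; splitting by the raw license field gives one series per code instead of the adopted/non-adopted pair.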
I just recently completed the Phantom Admin and Playbook Development training and am in the process of using what I've learned to set up Phantom as the SOAR platform for notable events generated in ES. I'm having problems getting the Update Event action in Phantom's built-in Splunk app to update the status of an ES notable event after it has been pushed to Phantom as a new container. Here is a description of the use case and the issue.

After experimenting with the various methods of getting notable events to Phantom, I've settled on the Event Forwarding option to "push" notable events to Phantom. I'm having problems getting the playbook to update notable event status in ES, and I'm wondering if anyone can help me debug this playbook. FYI, for the lab environment I'm using Splunk 8.0.1, ES 6.1.1, Phantom Add-On 3.0.5 and Phantom Community Edition 4.8.24304; all are the latest versions as of 5/25/20.

My use case for the playbook is as follows:

1 - An ES correlation search creates notable events.
2 - The Phantom Add-On for Splunk's Event Forwarding is enabled to forward new notable events to Phantom.
3 - Because I need Phantom to update the notable events later, the SPL used by the Event Forwarding process has the following additional field mappings defined (showing the Splunk fields mapped to the custom fields I created in Phantom):

event_id -> notableEventId
severity -> notableSeverity
urgency -> notableUrgency

4 - New containers are created in Phantom with the above 3 fields as artifacts.
5 - As new containers are created this way, I want a playbook to update the severity to match the urgency from the notable event, because Event Forwarding hard-codes the severity. I also want the playbook to update the original notable event with a quick comment and change the notable status to "In Progress".
The playbook's flow is as follows:

1 - Filter block #1: artifact:*.cef.notableEventId != blank
2 - Filter block #2: artifact:*.cef.notableUrgency != blank
3 - Decision block with 5 outputs, one for each of the 5 notable urgency values.

For each urgency:

4 - API block to change the container severity to match the urgency value.
5 - Format block to create a comment for the notable event.
6 - API block calling the "update event" action to update the ES notable event using the following:

event_ids = artifact:*.cef.notableEventId
owner = unassigned
status = in progress
integer_status = leave blank
urgency = artifact:*.cef.notableUrgency
comment = format_1:formatted_data.*

I tested the playbook, and the debugger showed all the blocks functioned as expected except the last API block, where it returns the following error and a FAILED status:

May 25 2020 19:04:46 GMT-0400 (Eastern Daylight Time): phantom.act(): 'update_event_2' cannot be run on asset 'splunkmjp'. The "update event" action requires the following parameters: event_ids. The given parameters look like they were automatically generated by phantom.act() because an empty parameters list was passed to phantom.act(). The parameters list may have been empty because the preceding call to phantom.collect2() returned an empty list. Check your calling code in the action that generated this error

This looks like the Splunk API block is missing the event_ids field, but the playbook is supposed to feed the value from the collection's artifact:*.cef.notableEventId field. I know that field is populated, because there's a filter block that verifies it.
The debugger also confirmed that the filter condition is valid with the following 2 lines:

Mon May 25 2020 19:04:46 GMT-0400 (Eastern Daylight Time): phantom.condition(): condition loop: condition 1, 'A1846671-F6A2-41EC-B0F9-E138A5837C00@@notable@@31ed12911128e0f233ba2bea38f4d3a8' '!=' '' => result:True
Mon May 25 2020 19:04:46 GMT-0400 (Eastern Daylight Time): phantom.condition(): returned 1 filtered artifacts AND 0 filtered action results

What am I missing that's preventing the Update Event action from working? Any suggestions or tips would be greatly appreciated! Thanks in advance.

In case you want to ask why I am even bothering with this use case: this is my interpretation of the Splunk ES and Phantom integration. I want to be able to use Phantom to collect evidence, conduct additional searches, and close the incident or case, while at the same time synchronizing the incident status and owner with the associated notable event in ES. I'm experimenting with using a playbook to do the updates in ES in order to learn about playbook design.
Hi Splunkers, we have an indicator of a phishing source from email headers - a PC name. We need to add it to a threat intel collection for ES, but I did not find an appropriate one to use. Should we create a custom list, or would any of the pre-existing collections work for it?
When authenticating to the Controller via LDAP as an authentication provider, the attached exception occurs. It seems that the AppDynamics Controller appends @customer1 as a suffix. How can we disable this? We are accessing AppDynamics from different domains. Thank you in advance. Best regards, Björn

[#|2020-05-25T15:04:03.447+0000|SEVERE|glassfish 4.1|com.singularity.ee.controller.servlet.GlassfishLoginServlet|_ThreadID=49;_ThreadName=http-listener-1(1);_TimeMillis=1590419043447;_LevelValue=1000;|ID000066 Error authenticating user java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.singularity.ee.controller.auth.ProgrammaticLoginHelper.login(ProgrammaticLoginHelper.java:95) at com.singularity.ee.controller.servlet.GlassfishLoginServlet.login(GlassfishLoginServlet.java:90) at com.singularity.ee.controller.servlet.GlassfishLoginServlet.doPost(GlassfishLoginServlet.java:162) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:344) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.CsrfFilter.doFilter(CsrfFilter.java:113) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.RequestOriginMarkingFilter.lambda$doFilter$0(RequestOriginMarkingFilter.java:26) at 
com.appdynamics.platform.RequestOrigin.runAs(RequestOrigin.java:64) at com.singularity.ee.controller.servlet.RequestOriginMarkingFilter.doFilter(RequestOriginMarkingFilter.java:24) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:106) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:106) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:106) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:106) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at com.singularity.ee.controller.servlet.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:106) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at 
com.singularity.ee.controller.servlet.CacheControlFilter.doFilter(CacheControlFilter.java:65) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:316) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734) at org.apache.catalina.core.StandardPipeline.doChainInvoke(StandardPipeline.java:678) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:416) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:283) at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167) at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206) at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180) at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235) at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284) at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201) at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133) at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112) at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) at 
org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:539) at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56) at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:593) at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:573) at java.lang.Thread.run(Thread.java:748) Caused by: com.sun.enterprise.security.auth.login.common.LoginException: Login failed: Cannot authenticate user : john.doe%40customer1 at com.sun.enterprise.security.auth.login.LoginContextDriver.doPasswordLogin(LoginContextDriver.java:396) at com.sun.enterprise.security.auth.login.LoginContextDriver.login(LoginContextDriver.java:241) at com.sun.enterprise.security.auth.login.LoginContextDriver.login(LoginContextDriver.java:154) at com.sun.web.security.WebProgrammaticLoginImpl.login(WebProgrammaticLoginImpl.java:125) at com.sun.enterprise.security.ee.auth.login.ProgrammaticLogin$2.run(ProgrammaticLogin.java:292) at com.sun.enterprise.security.ee.auth.login.ProgrammaticLogin$2.run(ProgrammaticLogin.java:290) at java.security.AccessController.doPrivileged(Native Method) at com.sun.enterprise.security.ee.auth.login.ProgrammaticLogin.login(ProgrammaticLogin.java:290) ... 
65 more Caused by: javax.security.auth.login.LoginException: Cannot authenticate user : john.doe%40customer1 at com.singularity.ee.controller.auth.ControllerLoginModule.authenticateUser(ControllerLoginModule.java:89) at com.sun.enterprise.security.BasePasswordLoginModule.login(BasePasswordLoginModule.java:145) at sun.reflect.GeneratedMethodAccessor261.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680) at javax.security.auth.login.LoginContext.login(LoginContext.java:587) at com.sun.enterprise.security.auth.login.LoginContextDriver.doPasswordLogin(LoginContextDriver.java:383) ... 72 more |#]
It looks like the app is failing to pull data. The logs are telling me no proxy is set, and I'm not sure where to set it for this app; most of the other Microsoft apps have a proxy section in the GUI - can one be added?
Hi, does anyone know how much this app costs, if anything? I can download it for free, but I am afraid that after a certain trial period I'll be charged (and I don't have a clue how much). Thanks! Lior
Using Splunk Cloud, we have version 6.1.1 of both the App and the Add-on installed. I'm seeing that Palo Alto logs aren't getting the host field set correctly; everything else seems to be parsed correctly. We're receiving the data, forwarded using the Log Forwarding App for Cortex. Upon receiving the data, we're extracting the time and host name from the raw syslog event and sending it to Splunk using the HEC input (using the extracted time as the _time value, the extracted host name as host, and the raw syslog event, as received by Cortex, as the event). Initially, we thought we'd be able to extract the host name easily, as syslog is pretty clear about which field it should appear in. Unfortunately, when data is forwarded from Cortex, the instance on which the Log Forwarder is running is written to the host field. Thus, by parsing the syslogs for a host name and setting it in the Splunk HEC data, we're effectively setting the same host field for every log (logforwarder-somethingsenseless). Given that Cortex apparently doesn't send the raw events, I would have assumed that the Splunk Add-on or TA would parse and rewrite the host field, but that isn't the case. Is our setup of Cortex->Syslog->HEC not supported, or is there something else we need to (re)configure in order for Splunk to set the correct host field? Edit: After looking through the App and Add-on source, I don't see the host field being set anywhere.
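For reference, the host extraction step the poster describes (pull the hostname out of the raw syslog header before handing the event to HEC) can be sketched like this. The regex assumes a classic RFC 3164-style header ("<PRI>Mmm dd hh:mm:ss HOSTNAME ..."); the actual format emitted by the Cortex Log Forwarder may differ, and the sample event is invented:

```python
import re
from typing import Optional

# Assumed RFC 3164-style header; adjust if the forwarder emits RFC 5424
# or a vendor-specific layout.
SYSLOG_RE = re.compile(
    r"^<\d+>"                    # priority, e.g. <14>
    r"\w{3}\s+\d+\s+[\d:]+\s+"   # timestamp, e.g. May 25 19:04:46
    r"(?P<host>\S+)\s"           # hostname field
)

def extract_host(raw_event: str) -> Optional[str]:
    """Return the syslog hostname field, or None if the event doesn't parse."""
    m = SYSLOG_RE.match(raw_event)
    return m.group("host") if m else None

print(extract_host("<14>May 25 19:04:46 fw-edge-01 1,2020/05/25,traffic,..."))
```

This is essentially what the poster is already doing before the HEC call; the open question in the post is why the Add-on does not do it for them.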
Hello guys, is it possible to limit Heavy Forwarder bandwidth the way you can on a UF (by setting [thruput] in limits.conf for forwarders)? Thanks.
I am using this lookup for bot status, and the "Submit" button to save the status info (disconnected or connected). I have added a screenshot:

| inputlookup status.csv
| append [ makeresults
  | eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S")
  | eval "DI Name"="I9", "Bot Name"="CD1", "Support poc"="sam", "Support Team"="IA", Status="disconnected"]
| top "DI Name" "Bot Name" "Support poc" "Support Team" Status Time
| table "DI Name" "Bot Name" "Support poc" "Support Team" Status Time
| outputlookup status.csv
| head 1
Hi, I am sending a part of the logs to separate Splunk instances. Will both Splunk licenses be impacted?
Hello, as we are running a lower version of Splunk Cloud, can the TA be checked for compatibility with 7.0 and 7.1? Are there any known compatibility issues with those earlier versions?
Hi all, I have this search:

|table a b date
|eval c=a-b
|stats sum(*) as * by date

date     a b c
2019-01  5 3 2
2019-02  4 3 1
2019-03  3 2 1
2019-04  6 3 3

I want to make it like this:

Date     d a b c
2019-01  0 5 3 2
2019-02  2 4 3 3
2019-03  3 3 2 4
2019-04  4 6 3 7

My formula is a - b = c. For the next month I want to carry c forward, like this: (c + a) - b = x, then the month after that (x + a) - b = y, then (y + a) - b = z, and so on. I understand the formula, but I cannot express it in Splunk. Do you have any ideas? Thank you for helping.
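The recurrence described above is a running total: each month's value is the previous result plus that month's a minus b, i.e. c[n] = c[n-1] + a[n] - b[n] (in SPL this kind of carried-forward total is usually what streamstats is for). A quick Python check, using the a and b values from the post, confirms the recurrence reproduces the desired column 2, 3, 4, 7:

```python
# Running balance: c[n] = c[n-1] + a[n] - b[n], with c[0] = 0.
# Rows are (date, a, b) taken from the post.
rows = [
    ("2019-01", 5, 3),
    ("2019-02", 4, 3),
    ("2019-03", 3, 2),
    ("2019-04", 6, 3),
]

running = 0
cumulative = []
for date, a, b in rows:
    running = running + a - b   # carry last month's result forward
    cumulative.append((date, running))

print(cumulative)
```

Working the recurrence out by hand like this makes it easy to see that what is needed in Splunk is a cumulative sum of (a - b) over the ordered months, not a per-row eval.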
Hi all, I want to add a new line in a dashboard title tag. For instance:

<table>
<title> hello world</title>
</table>

The resulting value is below:

hello world

Is this possible? Thanks all.
I set up a testing.csv lookup as follows:

host,location
123,HK
234,US
345,UK

I would like a basic search that, if the host matches in the log, does a stats count by location:

index=log sourcetype=csv | search [|inputlookup testing | return $host] | stats .... by location

But it seems to return nothing.
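The intended logic (keep only events whose host appears in the lookup, then count them by the lookup's location) can be sketched in Python using the lookup rows from the post; the log events below are invented for illustration. Whether the Splunk-side subsearch returns anything is a separate question (e.g. the field name produced by `return $host` has to match the field in the events):

```python
import csv
import io

# The testing.csv lookup from the post.
lookup_csv = """host,location
123,HK
234,US
345,UK
"""

# host -> location map, standing in for the Splunk lookup.
lookup = {row["host"]: row["location"]
          for row in csv.DictReader(io.StringIO(lookup_csv))}

# Hypothetical log events; only hosts present in the lookup are counted.
events = [{"host": "123"}, {"host": "123"}, {"host": "999"}, {"host": "345"}]

counts = {}
for ev in events:
    loc = lookup.get(ev["host"])
    if loc is not None:              # the equivalent of the subsearch filter
        counts[loc] = counts.get(loc, 0) + 1

print(counts)
```

Note that in this sketch the location comes from the lookup, not from the events; in Splunk that would mean applying the lookup to the matched events before the stats, since the raw log has no location field to group by.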
Hi, I am using Splunk 8.0.1 on Windows OS with Python 3. With the help of @woodcock's answer from https://answers.splunk.com/answers/489475/how-configure-an-alert-to-send-an-email-based-on-f.html I was trying the search below:

Your Base Search Here
| outputlookup MyTempLookup.csv
| stats count by EmailContact
| map maxsearches=9999 search="|inputlookup MyTempLookup.csv | search EmailContact=$EmailContact$ | sendemail to=\"$EmailContact$\" format=raw subject=myresults sendresults=true"

The search is returning results, but it is not sending any mail. When I checked the internal logs, I observed the error below:

ERROR sendemail:1428 - [HTTP 403] Client is not authorized to perform requested action; Traceback (most recent call last):
  File "D:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 1421, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "D:\Program Files\Splunk\etc\apps\search\bin\sendemail.py", line 400, in sendEmail
    jobResponseHeaders, jobResponseBody = simpleRequest(uriToJob, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "D:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\rest\__init__.py", line 559, in simpleRequest
    raise splunk.AuthorizationFailed(extendedMessages=uri)
splunk.AuthorizationFailed: [HTTP 403] Client is not authorized to perform requested action

Note: if I use only the sendemail command, it works and I receive the email. Does anything need to be modified in sendemail.py for Python 3?
Below is a sample query I tried, but it is not sending any email either:

index=_internal
| stats count by sourcetype
| eval EmailContact=if(sourcetype="splunkd","email@id.com","email2@id.com")
| outputlookup MyTempLookup.csv
| stats values(EmailContact) AS emailToHeader
| mvexpand emailToHeader
| map search="|inputlookup MyTempLookup.csv | where EmailContact=\"$emailToHeader$\" | fields - EmailContact | sendemail sendresults=true inline=true to=\"$emailToHeader$\" subject=\"Your Subject here: \" message=\"This report alert was generated by \$app\$ Splunk with this search string: \""

Any idea what mistake I am making in the above query? Thanks.
What does | rename field* AS * do? How do I rename fields when there are many of them? Thanks.
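For intuition, a wildcard rename of this shape strips a common prefix from every matching field name at once, instead of renaming fields one by one. A small Python sketch of that idea (the event and prefix below are illustrative; this mimics the effect, not Splunk's actual matching internals):

```python
# Strip a common prefix from matching field names, mimicking the
# effect of Splunk's `rename field* AS *` on an event's fields.
def rename_wildcard(record: dict, prefix: str) -> dict:
    renamed = {}
    for key, value in record.items():
        if key.startswith(prefix):
            renamed[key[len(prefix):]] = value   # "fieldhost" -> "host"
        else:
            renamed[key] = value                 # untouched fields pass through
    return renamed

event = {"fieldhost": "web01", "fieldstatus": 200, "source": "access.log"}
print(rename_wildcard(event, "field"))
```

So one rename with a wildcard covers every field sharing the prefix, which is exactly why it is handy when there are many fields.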
Hello Splunkers, we have one test case in which we have to monitor one CSV file (1K records) for any change. If we add any row or update anything, any number of times, the file needs to be re-ingested into the Splunk index each time. Please help me find a solution for this test case.
Hi all, I want to send syslog data directly from a Fortinet firewall (remote site) to our Splunk server via the Internet, and I want to encrypt the data in transit. Is it possible to do this without a syslog collector and a forwarder? Thanks.
Hi guys, I tried to run Splunk but it gives the errors below:

[splunk@ip-172-31-10-67 bin]$ sudo ./splunk enable boot-start -user splunk --accept-license
Warning: cannot create "/opt/splunk/var/log/splunk"
Warning: cannot create "/opt/splunk/var/log/introspection"
Warning: cannot create "/opt/splunk/var/log/watchdog"
Warning: cannot create "/opt/splunk/etc/licenses/download-trial"
First-time run failed!

[splunk@ip-172-31-10-67 bin]$ sudo ./splunk start --accept-license
Warning: cannot create "/opt/splunk/var/log/splunk"
Warning: cannot create "/opt/splunk/var/log/introspection"
Warning: cannot create "/opt/splunk/var/log/watchdog"
Warning: cannot create "/opt/splunk/etc/licenses/download-trial"

How do I resolve these warnings? Also, is it possible to provide a response file when running Splunk for the first time, so that I do not have to type the responses and can automate the setup without manual intervention? Please assist.