All Posts


Three years later and this worked! Thanks!!
To be pedantic, Splunk doesn't have "columns" in the DBA sense. We call them "fields". The head command returns all fields in the first n results. The fields to return can be controlled with the fields command.

index=<Splunk query>
| fields _time column1 column2 ... column15
| timechart span=15m count by dest_domain usenull=f useother=f
| head 15

In this case, however, head is unnecessary because timechart can do the same thing.

index=<Splunk query>
| fields _time column1 column2 ... column15
| timechart limit=15 span=15m count by dest_domain usenull=f useother=f
Hi @Yann.Buccellato, Let me share this with the Docs team and see if we can get this cleared up!
Please give us your definition of "noise". Do none of your other questions on the same topic address this, too? Have you considered using Ingest Actions to avoid indexing unwanted data?  See https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_in_Splunk_Enterprise and https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/DataIngest
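If Ingest Actions isn't an option in your environment, the classic props/transforms null-queue route does the same job at parse time. A minimal sketch, assuming the events come in under the WinEventLog:Security sourcetype (adjust to whatever yours actually is) and that 4624 is one of the codes you decide to drop:

props.conf (on the indexers or heavy forwarders):

[WinEventLog:Security]
TRANSFORMS-null = setnull

transforms.conf:

[setnull]
# Route matching events to the null queue so they are never indexed
REGEX = (?m)^EventCode=4624
DEST_KEY = queue
FORMAT = nullQueue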
Hi All,

The Splunk "head" command by default retrieves the top 10 columns and 10 results. May I know if we can control the number of columns to be retrieved?

index=<Splunk query> | timechart span=15m count by dest_domain usenull=f useother=f | head 15

e.g.

_time | column1 | ... | column15
1
2
...
15
I am using the REST API knowledge in https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-DB-Connect-V3-How-create-Automated-Programmatic-creation/m-p/304409

curl -k -u admin -X POST https://va10plvspl344.wellpoint.com:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities/ID_NAME -d "{\"name\":\"ID_NAME\",\"username\":\"myuser\",\"password\":\"mypwd\!123\",\"domain_name\":\"us\",\"use_win_auth\":true,\"port\":null}"
Enter host password for user 'admin':
{"code":500,"message":"There was an error processing your request. It has been logged (ID 7e7f61e0fdee4a8d)."}

This is the internal log msg:

2023-10-23 12:16:27.544 -0400 [dw-1667536 - POST /api/identities/ID_NAME] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 3eb5e19db5c58d19
java.lang.IllegalArgumentException: Invalid URL port: "null"
    at okhttp3.HttpUrl$Builder.parse$okhttp(HttpUrl.kt:1329)
    at okhttp3.HttpUrl$Companion.get(HttpUrl.kt:1633)
    at okhttp3.HttpUrl.get(HttpUrl.kt)
    at com.splunk.dbx.server.cyberark.data.ConnectionSettings.getUrl(ConnectionSettings.java:19)
    at com.splunk.dbx.server.cyberark.api.RetrofitClient.getClient(RetrofitClient.java:11)
    at com.splunk.dbx.server.cyberark.runner.CyberArkAccessImpl.setSettings(CyberArkAccessImpl.java:39)
    at com.splunk.dbx.server.cyberark.runner.CyberArkAccessImpl.<init>(CyberArkAccessImpl.java:24)
    at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.prepareConnection(IdentityServiceImpl.java:96)
    at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.getPassword(IdentityServiceImpl.java:132)
    at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.updatePassword(IdentityServiceImpl.java:119)
    at com.splunk.dbx.server.api.resource.IdentityResource.updatePassword(IdentityResource.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1651)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
    at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:567)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1377)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:507)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1292)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:501)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:750)
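For what it's worth, the IllegalArgumentException above points straight at the literal "port":null in the payload. A sketch of the same call with a concrete port value (1433 is only a placeholder; substitute your database's real port):

curl -k -u admin -X POST https://va10plvspl344.wellpoint.com:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities/ID_NAME \
  -d "{\"name\":\"ID_NAME\",\"username\":\"myuser\",\"password\":\"mypwd\!123\",\"domain_name\":\"us\",\"use_win_auth\":true,\"port\":1433}"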
Hi @waJesu, you have two choices:

1. create a new lookup by clicking "Create a New Lookup" in the Lookup Editor app, then importing and saving the csv file;
2. import the lookup from the Settings menu [Settings > Lookups > Lookup Table Files > New Lookup Table Files].

In both cases, remember to create the Lookup Definition, because it isn't automatic (a transforms.conf sketch follows below).

Ciao.
Giuseppe
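For reference, the definition itself can also be created in transforms.conf rather than through the UI; a minimal sketch, assuming the uploaded file is named mylookup.csv (a placeholder name):

transforms.conf:

[mylookup]
# Makes the uploaded CSV usable with the lookup and inputlookup commands
filename = mylookup.csv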
Is it possible to import an already created lookup table into the Splunk lookup file editor without having to create a new one in the editor? If it is possible, how do I do it?
This error occurs because the KV store is being used as opposed to a CSV file, and it does not interfere with the functionality of lookup creation. See the known issue at: https://splunk-sa-crowdstrike.ztsplunker.com/releases/issues/
Before I start, I've viewed the TreeMap and Word Tree visualisations but they don't seem to do what I need (happy to be proven wrong though).

We use Workday; we export the complete org hierarchy from Workday and ingest it into a lookup table every day. The data contains Name - OrgPosition - Manager - MiscDetails. So:

Name=Dave Bunn OrgPosition=12345_Dave_Bunn Manager=1230_Mrs_Bunn MiscDetails="some text about my job"

We then use the manager detail in the OrgPosition field to look for their manager, and so on, until we come across a service-level manager (indicated in the MiscDetails field):

Name=Mrs Bunn OrgPosition=1230_Mrs_Bunn Manager=10_The_Big_Boss MiscDetails="some text about Mrs Bunn's job"
Name=Big Boss OrgPosition=10_The_Big_Boss Manager=0_The_Director MiscDetails="Manager of HR"

What I would like to do is programmatically generate a hierarchy for any inputted user, with the named individual listed in the middle, their managers above, and subordinates below. I would like a visualisation similar to the Word Tree viz, but accept that it's more likely going to have to look like the principal name sandwiched between two fields: one containing sorted managers and one containing sorted subordinates.
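As a starting sketch for the data side, the manager chain can be walked with chained lookups, assuming a lookup definition named workday_org over this table (SPL has no recursion, so each management tier needs its own lookup call, repeated for as many tiers as you expect):

| inputlookup workday_org where Name="Dave Bunn"
| lookup workday_org OrgPosition AS Manager OUTPUT Name AS Manager_L1 Manager AS Manager_L1_pos
| lookup workday_org OrgPosition AS Manager_L1_pos OUTPUT Name AS Manager_L2
| lookup workday_org Manager AS OrgPosition OUTPUT Name AS DirectReports

DirectReports comes back multivalued when several people report to the same position, which gives you the "subordinates below" half of the sandwich.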
Hi, I'm trying to reduce the noise from these EventCodes, which we can exclude from the Enterprise Security point of view. Below are my stats of EventCodes. Could anyone please guide me on this?

EventCode   count
4624        25714108
4799        12271228
5140        4180598
4672        2896823
4769        2871064
4776        2177516
4798        1771003
4768        1149826
4662        919694
4793        667396
4627        428382
4771        344400
4702        261942
4625        229393
4698        131404
4699        107254
5059        92679
4611        86837
5379        74950
4735        55988
4770        31850
4946        31586
4719        30067
4688        27561
4948        26952
4945        19959
4648        17191
4825        17016
4697        13155
6416        6977

Thanks
Yes, the lookup definition contains all necessary fields, including the new ones. I want to update the same Server and Time rows as well as add new ones. I'm using Splunk Cloud, so I'm not sure there's a way to use accelerated fields?
@meshorer For all apart from the action failure (which you can get from the external Splunk option, but also from the app below), you will need to install a Universal Forwarder on the box, install the Splunk App for SOAR on it, and point it to your indexing layer, or a heavy forwarder as an interim forwarding layer. This comes with the ingestion config for all DAEMON logs out of the box. Instructions are here: https://docs.splunk.com/Documentation/SOARApp/1.0.41/Install/ConfigureITSI

decided
actiond
ingestd

Could be all daemon logs under sourcetype splunk_app_soar:daemon? Not a daemon, but you will likely need https://splunkbase.splunk.com/app/3412, as the app only ingests linux:audit from what I can see. If you end up wiring the monitor manually, see the inputs.conf sketch below.

--- Hope this helps! If so please mark as a solution for future readers. Happy SOARing! ---
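A minimal sketch of doing that monitor by hand on the UF, in case you go that route (the log path, index, and sourcetype here are assumptions based on a default on-prem install; verify all three on your box):

inputs.conf:

# Hypothetical stanza: daemon logs (decided, actiond, ingestd, ...) sit under this directory on a default install
[monitor:///opt/phantom/var/log/phantom]
sourcetype = splunk_app_soar:daemon
index = soar
disabled = 0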
Does the mylookup definition contain Variable1 and Variable2 as fields? You will have to edit collections.conf to add those new fields.

Also, are you wanting to update the same Server and Time row in the lookup? If so, you will need to also output the _key field from the lookup, as that is what the lookup uses to update the same row. As written, you look like you are adding new rows to the lookup.

Do you have accelerated fields in your lookup? See https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usingconfigurationfiles/#Accelerate-fields - accelerations will improve the lookup time.
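A minimal sketch of the update-in-place pattern for a KV store lookup, assuming the definition is named mylookup and using a made-up Server value (making _key explicit is what lets outputlookup overwrite the matching rows instead of appending duplicates):

| inputlookup mylookup where Server="web01"
| eval _key=_key
| eval Variable1="new value", Variable2="other value"
| outputlookup mylookup append=true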
If you're referring to https://splunkbase.splunk.com/app/3757 then the TA comes with enabled inputs that should fetch events for you.  On which instance(s) did you install the TA?  Have you configured it?
"tracks" meaning that I plan to monitor logs to fire an alert when for example a playbook fails to execute. in that case, I would probably need to identify which is the failing playbook by it's ID. ... See more...
"tracks" meaning that I plan to monitor logs to fire an alert when for example a playbook fails to execute. in that case, I would probably need to identify which is the failing playbook by it's ID. I have posted a new question about it thank you
The only way you can make colour coding on cells like that is to use the format expression, but the expression you would need does not appear to work, i.e.

<format type="color" field="Conn_Max">
  <colorPalette type="expression">case(tonumber(mvindex(split(value,"-"), 0)) &gt; (tonumber(mvindex(split(value,"-"), 1)) * .8), "#FF0000", tonumber(mvindex(split(value,"-"), 0)) &gt; (tonumber(mvindex(split(value,"-"), 1)) * .5), "#FFFFCC", 1==1, "#00FF00")</colorPalette>
</format>

What this attempts to do is to split the numbers and calculate the values, but it does not work - I believe there is a bug when using multivalue eval expressions. Otherwise, I don't believe it's possible to do it this way; you would have to do it differently.
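One way to do it differently, sketched under the assumption that Conn_Max holds "current-max" pairs like "40-50": precompute the ratio in the search so the palette expression only ever sees a plain number (Conn_Ratio is a made-up field name; the thresholds mirror the case() above):

In the search:

| eval Conn_Ratio = round(tonumber(mvindex(split(Conn_Max,"-"),0)) / tonumber(mvindex(split(Conn_Max,"-"),1)), 2)

Then colour the new column with a simple expression:

<format type="color" field="Conn_Ratio">
  <colorPalette type="expression">case(value &gt; 0.8, "#FF0000", value &gt; 0.5, "#FFFFCC", 1==1, "#00FF00")</colorPalette>
</format>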
Hello, I am trying to gather important logs from the daemons (in order to forward them to an external SIEM) that I could use to fire an alert when one of the following occurs:

1. an automated playbook failed to run
2. an action failed to work
3. Phantom was not able to ingest all the data forwarded to it and data was lost
4. a process (daemon) stopped working
5. system health - CPU, memory usage, disk usage

I have read the article "Configure the logging levels for Splunk Phantom daemons" (link: https://docs.splunk.com/Documentation/Phantom/4.10.7/Admin/Debug), but I would need to identify the relevant log that tells each. I would appreciate your help on this.
@meshorer Understood, but I am just wondering what you mean by tracking, as the system "tracks" them. Yes, there is a REST call to find the name based on the ID:

xxx/rest/playbook/<id>/name

If you need to find an ID based on the name, then you can also use (see the curl sketch below):

xxx/rest/playbook?_filter_name="<name>"

-- Happy SOARing! Please mark as a solution for future readers if it resolved your issue. --
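A curl sketch of both calls, assuming local admin credentials; the host and the playbook ID 42 are placeholders, and the quotes around the name filter are URL-encoded as %22:

curl -k -u admin "https://<soar-host>/rest/playbook/42/name"
curl -k -u admin "https://<soar-host>/rest/playbook?_filter_name=%22my_playbook%22"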
A separate table requires a separate search. If this is in a dashboard, then consider making the first table a base search and the second table a post-process of the first. That will save you time and resources when the dashboard runs.

...
<search id="base"> <!-- "base" can be any string -->
  <query>index=testindex | table company, ip, Vulnerability, Score | stats values(company) as company, avg(Score) as AvgScore by ip</query>
  ...
</search>
...
<search base="base"> <!-- Use the same string as above -->
  <query>| stats avg(AvgScore) as Average, median(AvgScore) as Median, max(AvgScore) as Max by company</query>
</search>
...