All Topics


Hi, in the example below I clearly understand that "hello world" will end up as the event text of a Splunk event:

{
  "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!"
}

curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":"hello world"}'

Now imagine that my JSON file contains many items like the ones below:

{
  "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!"
}
{
  "time": 1426279538, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello everybody!"
}

Should the curl command then look like this?

curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":}'

Last question: instead of typing the curl command at a prompt to send the JSON logs to Splunk, is it possible to use a script to do that, or something else? Does anybody have good examples of that? Thanks.
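For what it's worth, HEC accepts several JSON event objects concatenated in a single request body, so a batched call along these lines should work (a minimal sketch, reusing the localhost endpoint and placeholder token from the question; events.json is a hypothetical file holding the concatenated objects, with the "// epoch time" comments removed since they are not valid JSON):

# Two events in one HEC request; HEC parses concatenated JSON objects
curl -k -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" \
  https://localhost:8088/services/collector/event \
  -d '{"time": 1426279439, "index": "main", "sourcetype": "my_sample_data", "event": "Hello world!"}{"time": 1426279538, "index": "main", "sourcetype": "my_sample_data", "event": "Hello everybody!"}'

# Scripted variant: send a whole file of such objects in one call
curl -k -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" \
  https://localhost:8088/services/collector/event \
  --data-binary @events.json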
My current Splunk infra setup is clustered for search heads and indexers, and we are using a deployer and a cluster master to manage configs for the respective SH and IDX tiers. For example, can I manually place an updated config on SH1 and then run a rolling restart so the members sync/replicate it with each other? This is in the event the deployer is down. Eventually, once the deployer is back up, we will place the updated config on the deployer, so that when we run a sync it will not affect/remove the file from the SH cluster. Will there be any issues in this scenario?
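One caution: as far as I know, a rolling restart does not replicate files placed on disk; the SH cluster only replicates runtime changes made through the UI or REST, so a manually placed .conf would need to be copied to every member by hand while the deployer is down. For reference, the normal push once the deployer is back looks like this sketch (host name and credentials are placeholders):

# Run on the deployer; the target can be any one SH member
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme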
After opening a case with Splunk tech support because we were unable to upgrade our Windows 2019 servers in place from Splunk version 9.0.0 to 9.1.1, we were instructed to back up the etc directory, then uninstall Splunk, do a fresh install of 9.1.1, and copy the old etc directory back over. Well, we did just that, except we also put it on new/different hardware, and now we can't log in to Splunk Web: we get the login screen, it takes our credentials, and then we get the three dots of death. Any help/advice is tremendously appreciated.
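As a first diagnostic step, the UI and splunkd logs usually say why the login hangs (a sketch; the default Windows install path below is an assumption):

# Look for ERROR/WARN entries around the time of the failed login
C:\Program Files\Splunk\var\log\splunk\web_service.log
C:\Program Files\Splunk\var\log\splunk\splunkd.log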
I would like to allowlist a URL from my dashboards so that the redirection warnings no longer pop up. Per the documentation, I can do this by editing web-features.conf on my SHs. What would be the proper way to push this as a bundle? I tried creating and modifying web-features.conf in an app context on the SHC deployer (../shcluster/apps/myapp/default/web-features.conf), but I still got the pop-up (yes, I restarted the SHs). After using "apply shcluster-bundle", I used btool AND show config to verify that the config changes appeared on the SHs. No dice. If I modify web-features.conf directly on the SHs (../etc/system/local/web-features.conf), it works perfectly. Thank you! My edited web-features.conf is below:

[feature:dashboards_csp]
dashboards_trusted_domain.domain1 = *myurl.com
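One thing worth ruling out is precedence: anything in etc/system/local on the members outranks an app's default directory, so a stray copy of the stanza there could mask the pushed value. A quick check on a member (a sketch; the path assumes a standard install):

$SPLUNK_HOME/bin/splunk btool web-features list --debug | grep dashboards_trusted_domain

The --debug flag prints which file each line comes from, which should show whether the pushed app is actually losing to another file.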
How to convert the JSON below to a table?

{
  "Group10": {
    "owner": "Abishek Kasetty",
    "fail": 2,
    "total": 12,
    "agile_team": "Punchout_ReRun",
    "test": "",
    "pass": 6,
    "report": "",
    "executed_on": "Mon Oct 23 03:10:48 EDT 2023",
    "skip": 0,
    "si_no": "10"
  },
  "Group09": {
    "owner": "Lavanya Kavuru",
    "fail": 45,
    "total": 190,
    "agile_team": "Hawks_ReRun",
    "test": "",
    "pass": 42,
    "report": "",
    "executed_on": "Sun Oct 22 02:57:43 EDT 2023",
    "skip": 0,
    "si_no": "09"
  }
}

Expected output:

agile_team     pass   fail
Hawks_ReRun    42     45
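One way to flatten an object keyed by group name is Splunk's native JSON functions (a sketch, assuming Splunk 8.1+ where json_keys, json_array_to_mv, and json_extract exist, and that each event's _raw is one such JSON document; index and sourcetype are placeholders):

index=myindex sourcetype=mysourcetype
| eval group=json_array_to_mv(json_keys(_raw))
| mvexpand group
| eval agile_team=json_extract(_raw, group . ".agile_team"),
       pass=json_extract(_raw, group . ".pass"),
       fail=json_extract(_raw, group . ".fail")
| table agile_team pass fail

mvexpand produces one result row per group, so both Group10 and Group09 come out as table rows.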
Hi Team, we are observing a discrepancy in a time calculation when the millisecond part of the timestamp is below 100 ms. Example:

Response time: "2023-10-23 14:46:14.84"
Request time: "2023-10-23 14:46:13.948"

The "Response time - Request time" value should be 136 ms, but Splunk shows it as 890 ms. While calculating, Splunk treats the inbound value as "2023-10-23 14:46:14.840" instead of ".084", because the fraction has only 2 digits. So, is there any possibility of resolving this discrepancy at the Splunk query level or in .conf?

Regards,
Siva.
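If the source really does emit zero-suppressed milliseconds (".84" meaning 084 ms rather than 840 ms), one query-level workaround is to left-pad the fraction to three digits before parsing (a sketch; ResponseTime and RequestTime are assumed field names):

| eval resp_fixed=replace(ResponseTime, "\.(\d)$", ".00\1")
| eval resp_fixed=replace(resp_fixed, "\.(\d{2})$", ".0\1")
| eval req_fixed=replace(RequestTime, "\.(\d)$", ".00\1")
| eval req_fixed=replace(req_fixed, "\.(\d{2})$", ".0\1")
| eval diff_ms=round((strptime(resp_fixed, "%Y-%m-%d %H:%M:%S.%3N") - strptime(req_fixed, "%Y-%m-%d %H:%M:%S.%3N")) * 1000)

The two replace calls only fire on one- and two-digit fractions, so a proper three-digit value like .948 passes through unchanged.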
Hi All,

The Splunk "head" command by default retrieves the first 10 results. May I know if we can also control the number of columns retrieved? For example:

index=<Splunk query> | timechart span=15m count by dest_domain usenull=f useother=f | head 15

I would like the output shaped like this, with rows 1 through 15:

_time | column1 | ... | column15
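Note that head only limits rows; the number of series (columns) produced by timechart is controlled by its limit option (a sketch built on the query above):

index=<Splunk query>
| timechart span=15m count by dest_domain usenull=f useother=f limit=15
| head 15

limit=15 keeps the 15 highest-volume dest_domain series; with useother=f, the rest are dropped instead of being grouped into OTHER.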
I am using the REST API approach described in https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-DB-Connect-V3-How-create-Automated-Programmatic-creation/m-p/304409:

curl -k -u admin -X POST https://va10plvspl344.wellpoint.com:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities/ID_NAME -d "{\"name\":\"ID_NAME\",\"username\":\"myuser\",\"password\":\"mypwd\!123\",\"domain_name\":\"us\",\"use_win_auth\":true,\"port\":null}"

Enter host password for user 'admin':
{"code":500,"message":"There was an error processing your request. It has been logged (ID 7e7f61e0fdee4a8d)."}

This is the internal log message:

2023-10-23 12:16:27.544 -0400 [dw-1667536 - POST /api/identities/ID_NAME] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 3eb5e19db5c58d19
java.lang.IllegalArgumentException: Invalid URL port: "null"
  at okhttp3.HttpUrl$Builder.parse$okhttp(HttpUrl.kt:1329)
  at okhttp3.HttpUrl$Companion.get(HttpUrl.kt:1633)
  at okhttp3.HttpUrl.get(HttpUrl.kt)
  at com.splunk.dbx.server.cyberark.data.ConnectionSettings.getUrl(ConnectionSettings.java:19)
  at com.splunk.dbx.server.cyberark.api.RetrofitClient.getClient(RetrofitClient.java:11)
  at com.splunk.dbx.server.cyberark.runner.CyberArkAccessImpl.setSettings(CyberArkAccessImpl.java:39)
  at com.splunk.dbx.server.cyberark.runner.CyberArkAccessImpl.<init>(CyberArkAccessImpl.java:24)
  at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.prepareConnection(IdentityServiceImpl.java:96)
  at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.getPassword(IdentityServiceImpl.java:132)
  at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.updatePassword(IdentityServiceImpl.java:119)
  at com.splunk.dbx.server.api.resource.IdentityResource.updatePassword(IdentityResource.java:71)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
  at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
  at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
  at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)
  at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
  at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
  at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
  at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
  at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
  at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
  at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
  at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
  at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
  at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
  at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
  at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
  at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
  at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
  at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
  at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
  at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
  at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
  at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1651)
  at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
  at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
  at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
  at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:567)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
  at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1377)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:507)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
  at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1292)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
  at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
  at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
  at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
  at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
  at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
  at org.eclipse.jetty.server.Server.handle(Server.java:501)
  at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
  at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556)
  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
  at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
  at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
  at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
  at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
  at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
  at java.lang.Thread.run(Thread.java:750)
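The top of the trace shows the proxy failing to build a connection URL because of "port": null (Invalid URL port: "null"), so passing a real numeric port in the payload may avoid the 500 (a sketch; 1433 is a placeholder for whatever port your database actually listens on):

curl -k -u admin -X POST https://va10plvspl344.wellpoint.com:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities/ID_NAME \
  -d "{\"name\":\"ID_NAME\",\"username\":\"myuser\",\"password\":\"mypwd\!123\",\"domain_name\":\"us\",\"use_win_auth\":true,\"port\":1433}"

The CyberArk frames in the trace suggest the identity is wired through a credential-vault client that requires the port, which would explain why null is rejected.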
Is it possible to import an already created lookup table into the Splunk lookup file editor without having to create a new one in the editor? If it is possible, how do I do it?
Before I start: I've viewed the TreeMap and Word Tree visualisations, but they don't seem to do what I need (happy to be proven wrong, though). We use Workday; we export the complete org hierarchy from Workday and ingest it into a lookup table every day.

The data contains: Name - OrgPosition - Manager - MiscDetails

So:
Name=Dave Bunn OrgPosition=12345_Dave_Bunn Manager=1230_Mrs_Bunn MiscDetails="some text about my job"

We then use the manager detail in the OrgPosition field to look for their manager, and so on, until we come across a service-level manager (indicated in the MiscDetails field):

Name=Mrs Bunn OrgPosition=1230_Mrs_Bunn Manager=10_The_Big_Boss MiscDetails="some text about Mrs Bunn's job"
Name=Big Boss OrgPosition=10_The_Big_Boss Manager=0_The_Director MiscDetails="Manager of HR"

What I would like to do is programmatically generate a hierarchy for any inputted user, with the named individual listed in the middle, their managers above, and subordinates below. I would like a visualisation similar to the Word Tree viz, but I accept that it's more likely going to have to look like the principal name sandwiched between two fields: one containing sorted managers and one containing sorted subordinates.
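For the tabular fallback, chained lookups can walk the tree in both directions (a sketch; org_hierarchy.csv is an assumed lookup definition with the fields above, and each additional management level needs one more chained lookup):

| inputlookup org_hierarchy.csv where Name="Dave Bunn"
| lookup org_hierarchy.csv OrgPosition AS Manager OUTPUT Name AS Manager1, Manager AS Manager1_Manager
| lookup org_hierarchy.csv OrgPosition AS Manager1_Manager OUTPUT Name AS Manager2
| join type=left OrgPosition [
    | inputlookup org_hierarchy.csv
    | stats values(Name) AS Subordinates by Manager
    | rename Manager AS OrgPosition ]
| table Manager2 Manager1 Name Subordinates

The join subsearch collects everyone whose Manager field points at the principal's OrgPosition, giving the sorted subordinate list; the chained lookups supply the managers above.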
Hi, I'm trying to reduce the noise from these EventCodes, i.e. determine which ones we can exclude from an Enterprise Security point of view. Below are my stats by EventCode; could anyone please guide me on this?

EventCode   count
4624        25714108
4799        12271228
5140        4180598
4672        2896823
4769        2871064
4776        2177516
4798        1771003
4768        1149826
4662        919694
4793        667396
4627        428382
4771        344400
4702        261942
4625        229393
4698        131404
4699        107254
5059        92679
4611        86837
5379        74950
4735        55988
4770        31850
4946        31586
4719        30067
4688        27561
4948        26952
4945        19959
4648        17191
4825        17016
4697        13155
6416        6977

Thanks
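Once you decide certain codes are safe to drop, the usual pattern is a nullQueue transform on the indexing tier (a sketch, assuming classic WinEventLog:Security ingestion; the codes in the regex are illustrative, not a recommendation of what to drop):

# props.conf
[WinEventLog:Security]
TRANSFORMS-drop_noisy = drop_noisy_eventcodes

# transforms.conf
[drop_noisy_eventcodes]
REGEX = (?m)^EventCode=(4624|4799|5140)
DEST_KEY = queue
FORMAT = nullQueue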
Hello, I am trying to gather the important logs from the daemons (in order to forward them to an external SIEM) that I could use to fire an alert when one of the following occurs:
1. an automated playbook failed to run
2. an action failed to work
3. Phantom was not able to ingest all the data forwarded to it and data was lost
4. a process (daemon) stopped working
5. system health: CPU, memory usage, disk usage
I have read the article "Configure the logging levels for Splunk Phantom daemons" (link: https://docs.splunk.com/Documentation/Phantom/4.10.7/Admin/Debug), but I would need to identify the relevant log that tells each. I would appreciate your help on this.
I have a large KV store lookup (approximately 1.5-2 million rows and 4 columns), and I need to create a search that adds 2 new columns to it from corresponding data. Essentially the lookup looks like this:

Server | Time | Variable1 | Variable2

and I need it to look like this:

Server | Time | Variable1 | Variable2 | Variable3 | Variable4

My current search is:

index=index sourcetype=sourcetype
| stats count by Server Time Variable3 Variable4
| fields - count
| lookup mylookup Server Time OUTPUT Variable1 Variable2
| outputlookup mylookup

The problem I'm running into is that the search gets stuck on that lookup command for 2+ hours, and I'm not sure why it's taking so long to match the data. Does anyone have any insight on why that is occurring, or how I can restructure my search to accomplish this more efficiently? Or would it be better to update the KV store via the REST API from the same script that generates Variable3 and Variable4?
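On the REST idea: the KV store exposes a batch_save endpoint that upserts documents by _key in bulk, which avoids running the big lookup in SPL at all (a sketch; app, collection, credentials, and field values are placeholders, and the default cap is 1000 documents per call). One caveat: batch_save replaces whole documents, so each posted document must include every field you want to keep, not just the new ones:

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/myapp/storage/collections/data/mylookup_collection/batch_save \
  -H "Content-Type: application/json" \
  -d '[{"_key": "existing_key_1", "Server": "srv1", "Time": "2023-10-23T00:00:00", "Variable1": "v1", "Variable2": "v2", "Variable3": "x", "Variable4": "y"}]'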
How do I create a total average/median/max of a field in a separate table? Thank you in advance.

index=testindex | table company, ip, Vulnerability, Score

company    ip    Vulnerability   Score
CompanyA   ip1   Vuln1           2
CompanyA   ip1   Vuln2           0
CompanyA   ip2   Vuln3           4
CompanyA   ip2   Vuln4           2
CompanyA   ip3   Vuln5           3
CompanyA   ip3   Vuln6           5

Group by ip => this worked just fine:

| stats values(company) as company, avg(Score) as AvgScore by ip

company    ip    AvgScore
CompanyA   ip1   1
CompanyA   ip2   3
CompanyA   ip3   4

Group by company => how do I group by company after grouping by ip (using stats) and put it in a separate table?

| stats avg(AvgScore) as Average, median(AvgScore) as Median, max(AvgScore) as Max by company

company    Average   Median   Max
CompanyA   2.7       3        4
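Chaining the two stats commands produces the second table directly (a sketch over the fields above):

index=testindex
| stats values(company) as company, avg(Score) as AvgScore by ip
| stats avg(AvgScore) as Average, median(AvgScore) as Median, max(AvgScore) as Max by company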
Hi, I see that playbook IDs keep changing all the time. Can anyone explain the reason for this?

Thank you,
Daniel
Hello, I have installed the Add-on for Microsoft Azure. How can I get data in from Azure Service Bus?
Hi, I have connected my Splunk to LDAP and subsequently mapped the AD groups to the respective roles in Splunk. However, I have noted that the users are not populated or shown under "Users" in the web UI. I have asked a user to whom I mapped the roles to log in (LDAP authentication), and they are able to log in and search. There is no existing local account for the user. Running Splunk Enterprise v9.0.6. Appreciate it if anyone can help with this. Thanks.
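As a quick check, the REST users endpoint lists the users splunkd currently knows about (a sketch; credentials are placeholders). Note that LDAP users typically only show up in the Users page after their first successful login, rather than being pre-populated from the directory:

curl -k -u admin:changeme https://localhost:8089/services/authentication/users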
Hi, I have a query that triggers when a user has been added to specific types of groups. The query depends on a lookup with 2 columns (one for group_name, another for Severity). I want to find any event of a user being added to one of the monitored groups, but also to enrich my final table with the severity right next to the group_name. I have tried to resolve this using:

| join type=left group_name [| inputlookup my_list.csv]
| where isnotnull(Severity)

But somehow only 2 groups with low severity are found, even though every group in the list has its own severity. How can I make my table show each group with its severity?
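The lookup command is usually a better fit than join here: join is capped at 50,000 subsearch rows and matches field values exactly (including case), either of which can silently drop groups. A sketch, assuming my_list.csv from the question is usable as a lookup:

... your base search for group-add events ...
| lookup my_list.csv group_name OUTPUT Severity
| where isnotnull(Severity)
| table _time user group_name Severity

If group names differ only in case between the events and the CSV, normalising both sides with lower() before the lookup should recover the missing matches.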
Dears, how can I find out which devices (switches, routers, etc.), operating systems (Windows, Linux, macOS, etc.), applications (web applications, mobile applications, etc.), and services (database servers, web servers, etc.) Splunk Enterprise Security supports? And also, which OS is supported for the search head and indexer: Windows Server or Linux? I could not find this in the documentation or on the internet.

Thank you in advance!
Hi there, I am currently trying to set up a universal forwarder on a CentOS 7 server. I have extracted the package and am attempting to start the service, but I receive the following:

ERROR: mgmt port [8089] - port is already bound. Splunk needs to use this port.

The port is in fact in use, and thus the only solution seems to be using a non-default port. Will this cause any issues in the long run, or is running Splunk on different ports completely supported? I have only ever used the default ports for previous installations. Any help/info would be appreciated.

Jamie
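Running on a non-default management port is supported; it just has to be used consistently anywhere 8089 would normally appear (REST calls against the forwarder, monitoring tools, etc.). Two equivalent ways to change it (a sketch; 8090 is an arbitrary free port):

# Option 1: CLI (then restart the forwarder)
$SPLUNK_HOME/bin/splunk set splunkd-port 8090

# Option 2: web.conf on the forwarder
[settings]
mgmtHostPort = 127.0.0.1:8090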