I am using the REST API knowledge in https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-DB-Connect-V3-How-create-Automated-Programmatic-creation/m-p/304409

curl -k -u admin -X POST https://va10plvspl344.wellpoint.com:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities/ID_NAME -d "{\"name\":\"ID_NAME\",\"username\":\"myuser\",\"password\":\"mypwd\!123\",\"domain_name\":\"us\",\"use_win_auth\":true,\"port\":null}"

Enter host password for user 'admin':
{"code":500,"message":"There was an error processing your request. It has been logged (ID 7e7f61e0fdee4a8d)."}

This is the internal log message:

2023-10-23 12:16:27.544 -0400 [dw-1667536 - POST /api/identities/ID_NAME] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 3eb5e19db5c58d19
java.lang.IllegalArgumentException: Invalid URL port: "null"
    at okhttp3.HttpUrl$Builder.parse$okhttp(HttpUrl.kt:1329)
    at okhttp3.HttpUrl$Companion.get(HttpUrl.kt:1633)
    at okhttp3.HttpUrl.get(HttpUrl.kt)
    at com.splunk.dbx.server.cyberark.data.ConnectionSettings.getUrl(ConnectionSettings.java:19)
    at com.splunk.dbx.server.cyberark.api.RetrofitClient.getClient(RetrofitClient.java:11)
    at com.splunk.dbx.server.cyberark.runner.CyberArkAccessImpl.setSettings(CyberArkAccessImpl.java:39)
    at com.splunk.dbx.server.cyberark.runner.CyberArkAccessImpl.<init>(CyberArkAccessImpl.java:24)
    at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.prepareConnection(IdentityServiceImpl.java:96)
    at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.getPassword(IdentityServiceImpl.java:132)
    at com.splunk.dbx.server.api.service.conf.impl.IdentityServiceImpl.updatePassword(IdentityServiceImpl.java:119)
    at com.splunk.dbx.server.api.resource.IdentityResource.updatePassword(IdentityResource.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1651)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
    at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:567)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1377)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:507)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1292)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:501)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:750)
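
The top of the trace (okhttp's URL builder rejecting the literal port "null") suggests the JSON "port":null reaches DB Connect's connection-settings code unconverted. Not a confirmed fix, but one sketch worth trying is to omit the port key entirely instead of sending null; single quotes also avoid having to escape the ! in the password:

curl -k -u admin -X POST \
  https://va10plvspl344.wellpoint.com:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities/ID_NAME \
  -d '{"name":"ID_NAME","username":"myuser","password":"mypwd!123","domain_name":"us","use_win_auth":true}'

If the identity genuinely needs a port, supplying a real integer rather than null would exercise the same code path without the unparseable value.
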
Is it possible to import an already created lookup table into the Splunk lookup file editor without having to create a new one in the editor? If it is possible, how do I do it?
Before I start, I've viewed the TreeMap and Word Tree visualisations, but they don't seem to do what I need (happy to be proven wrong though). We use Workday; we export the complete org hierarchy from Workday and ingest it into a lookup table every day.

The data contains: Name, OrgPosition, Manager, MiscDetails. So:

Name=Dave Bunn OrgPosition=12345_Dave_Bunn Manager=1230_Mrs_Bunn MiscDetails="some text about my job"

We then use the manager detail in the OrgPosition field to look up their manager, and so on, until we come across a service-level manager (indicated in the MiscDetails field):

Name=Mrs Bunn OrgPosition=1230_Mrs_Bunn Manager=10_The_Big_Boss MiscDetails="some text about Mrs Bunn's job"
Name=Big Boss OrgPosition=10_The_Big_Boss Manager=0_The_Director MiscDetails="Manager of HR"

What I would like to do is programmatically generate a hierarchy for any inputted user, with the named individual listed in the middle, their managers above, and subordinates below. I would like a visualisation similar to the Word Tree viz, but I accept it's more likely going to have to look like the principal name sandwiched between two fields: one containing sorted managers and one containing sorted subordinates.

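A rough SPL sketch of the chain-walking part, assuming the daily export lands in a lookup file called workday_org.csv (a placeholder name). SPL has no native recursion, so each manager level is one more lookup hop; two levels are shown, and the subordinate branch hardcodes the principal's OrgPosition for clarity (in a dashboard both would come from the same token):

| inputlookup workday_org.csv where Name="Dave Bunn"
| lookup workday_org.csv OrgPosition AS Manager OUTPUT Name AS manager_1, Manager AS manager_1_pos
| lookup workday_org.csv OrgPosition AS manager_1_pos OUTPUT Name AS manager_2
| append
    [| inputlookup workday_org.csv where Manager="12345_Dave_Bunn"
     | stats list(Name) AS subordinates]
| table Name, manager_2, manager_1, subordinates

Since the depth cap is fixed, an alternative is to flatten the full manager chain into the lookup at export time, which keeps the search to a single lookup regardless of depth.
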
Hi, I'm trying to reduce the noise from these EventCodes, which we can exclude from the Enterprise Security point of view. Below are my stats of EventCodes. Could anyone please guide me on this?

EventCode  count
4624       25714108
4799       12271228
5140       4180598
4672       2896823
4769       2871064
4776       2177516
4798       1771003
4768       1149826
4662       919694
4793       667396
4627       428382
4771       344400
4702       261942
4625       229393
4698       131404
4699       107254
5059       92679
4611       86837
5379       74950
4735       55988
4770       31850
4946       31586
4719       30067
4688       27561
4948       26952
4945       19959
4648       17191
4825       17016
4697       13155
6416       6977

Thanks

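To sanity-check an exclusion list before committing to it, a search-time sketch can show what remains once the noisiest codes are dropped (index and sourcetype below are placeholders for your Windows event data):

index=wineventlog sourcetype=XmlWinEventLog NOT EventCode IN (4624, 4799, 5140, 4672, 4769, 4776)
| stats count BY EventCode
| sort - count

Once the list is settled, the usual next step is to drop those codes at ingestion (props/transforms routing to nullQueue, or the Windows TA's input blacklists) so they stop consuming license at all.
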
Hello, I am trying to gather important logs from the daemons (in order to forward them to an external SIEM) that I could use to fire an alert when one of the following occurs:
1. An automated playbook failed to run.
2. An action failed to work.
3. Phantom was not able to ingest all the data forwarded to it and data was lost.
4. A process (daemon) stopped working.
5. System health: CPU, memory usage, disk usage.
I have read the article "Configure the logging levels for Splunk Phantom daemons" (link: https://docs.splunk.com/Documentation/Phantom/4.10.7/Admin/Debug), but I would need to identify the relevant log that tells each. I would appreciate your help on this.

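If those daemon logs are already indexed somewhere searchable, a very rough alert skeleton is below; index=phantom_daemons and the failure keywords are placeholders, not documented Phantom log formats, so they would need to be replaced with whatever the real playbook/action/ingestion failure entries look like in your environment:

index=phantom_daemons ("failed" OR "error" OR "exception")
| stats count BY source
| where count > 0
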
I have a large KV store lookup (approximately 1.5-2 million rows and 4 columns), and I need to create a search that adds 2 new columns to it from corresponding data. Essentially the lookup is like this:

Server  Time  Variable1  Variable2

and I need it to look like this:

Server  Time  Variable1  Variable2  Variable3  Variable4

My current search is like this:

index=index sourcetype=sourcetype
| stats count by Server Time Variable3 Variable4
| fields - count
| lookup mylookup Server Time OUTPUT Variable1 Variable2
| outputlookup mylookup

The problem I'm running into is that the search gets caught on that lookup command for 2+ hours, and I'm not sure why it's taking so long to match that data. Does anyone have any insight into why that is occurring, or how I can restructure my search to accomplish this more efficiently? Or would it be better to update the KV store via the REST API from the same script that is generating Variable3 and Variable4?

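One restructuring sketch merges the two datasets with stats instead of a per-row lookup, under the assumption that Server plus Time uniquely keys a row on both sides (stats first() skips null values, so each output row picks up Variable1/Variable2 from the lookup rows and Variable3/Variable4 from the events):

index=index sourcetype=sourcetype
| stats count BY Server, Time, Variable3, Variable4
| fields - count
| append [| inputlookup mylookup]
| stats first(Variable1) AS Variable1, first(Variable2) AS Variable2, first(Variable3) AS Variable3, first(Variable4) AS Variable4 BY Server, Time
| outputlookup mylookup

One caveat: append runs as a subsearch and is capped by subsearch limits (around 50,000 rows by default), which a 1.5-2 million row collection blows past. At that scale, updating the KV store over REST from the script that produces Variable3/Variable4 is likely the more robust option.
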
How to create a total average/median/max of a field in a separate table? Thank you in advance.

index=testindex
| table company, ip, Vulnerability, Score

company   ip   Vulnerability  Score
CompanyA  ip1  Vuln1          2
CompanyA  ip1  Vuln2          0
CompanyA  ip2  Vuln3          4
CompanyA  ip2  Vuln4          2
CompanyA  ip3  Vuln5          3
CompanyA  ip3  Vuln6          5

Group by ip => this worked just fine:

| stats values(company), avg(Score) as AvgScore by ip

company   ip   AvgScore
CompanyA  ip1  1
CompanyA  ip2  3
CompanyA  ip3  4

Group by company => how do I group by company after grouping by ip (using stats), and put it in a separate table?

| stats avg(AvgScore) as Average, median(AvgScore) as Median, max(AvgScore) as Max by company

company   Average  Median  Max
CompanyA  2.7      3       4

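For the company-level rollup, the second stats can simply consume the output of the first; a sketch of the full chain:

index=testindex
| stats values(company) AS company, avg(Score) AS AvgScore BY ip
| stats avg(AvgScore) AS Average, median(AvgScore) AS Median, max(AvgScore) AS Max BY company

A single SPL search renders a single table, so to show both levels side by side, the per-ip chain and this per-company chain would each go in their own dashboard panel.
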
Hi, I see that playbook IDs keep changing all the time. Can anyone explain the reasons for it? Thank you, Daniel
Hello, I have installed the Add-on for Microsoft Azure. How can I get data in from Azure Service Bus?
Hi, I have onboarded my Splunk to LDAP and subsequently mapped the AD groups to the respective roles in Splunk. However, I have noted that the users are not populated or shown under "Users" in the web UI. I have asked a user to whom I mapped the roles to log in (LDAP authentication), and they are able to log in and search. There is no existing local account for the user. Running Splunk Enterprise v9.0.6. Appreciate it if anyone can help with this. Thanks.
Hi, I have a query that triggers when a user has been added to specific types of groups. The query depends on a lookup with 2 columns (one for group_name, another for Severity). I want to find any event of a user being added to one of the monitored groups, but also to enrich my final table with the severity right next to the group_name. I have tried to resolve this using:

| join type=left group_name
    [| inputlookup my_list.csv]
| where isnotnull(Severity)

But somehow only 2 groups with low severity are being found, even though every group in the list has its own severity. How can I make my table show each group with its severity?

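For this kind of enrichment, lookup is usually more reliable than join (join's subsearch is row-limited, and its matching is case-sensitive, either of which could explain the missing groups); a sketch, assuming my_list.csv has exactly the columns group_name and Severity, with the base search left as a placeholder:

<your group-membership-change search>
| lookup my_list.csv group_name OUTPUT Severity
| where isnotnull(Severity)
| table _time, user, group_name, Severity

If some groups still fail to match, comparing the case and whitespace of group_name in the events against the CSV is the first thing to check, since file-based lookups also match case-sensitively by default.
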
Dear all, how do I find out which devices (switch, router, etc.), operating systems (Windows, Linux, macOS, etc.), applications (web application, mobile application, etc.) and services (database server, web server, etc.) Splunk Enterprise Security supports? And which OS is supported for the search head and indexer: Windows Server or Linux? I could not find this in the documentation or anywhere on the internet. Thank you in advance!
Hi there, I am currently trying to set up a universal forwarder on a CentOS 7 server. I have extracted the package and am attempting to start the service, but receive the following:

ERROR: mgmt port [8089] - port is already bound. Splunk needs to use this port.

The port is in fact in use, so the only solution seems to be using a non-default port. Will this cause any issues in the long run, or is running Splunk on different ports completely supported? I have only ever used the default ports for previous installations. Any help/info would be appreciated, Jamie

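Non-default ports are supported; the forwarder just needs the management port set explicitly. A sketch of one way to do it (9089 is an arbitrary free port, and the path assumes a default install location):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
mgmtHostPort = 127.0.0.1:9089

The local splunk CLI picks the new port up from the config automatically; the main thing to remember is that any direct REST call against this instance must now use 9089 instead of 8089.
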
Hi, Splunk usually takes the event's log time (_time) and parses it into: date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year. I have found that some of our indexes do not contain these fields, only _time. What may cause this issue? In addition, I am not sure, but I have found something related to "DATETIME_CONFIG = /etc/datetime.xml" that might be a good pointer; there is not much on the internet that explains how to resolve this. Would appreciate your help here.

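For background: the date_* fields are only generated when the timestamp is parsed out of the raw event text at index time; sources where _time is assigned some other way (for example DATETIME_CONFIG = CURRENT or NONE, or inputs that supply the time themselves) won't have them. If you just need the components, they can be rebuilt from _time at search time; one caveat is that the original date_* fields reflect the event's own timezone, while strftime below uses the search user's timezone:

index=your_index
| eval date_hour=strftime(_time, "%H"),
       date_mday=strftime(_time, "%d"),
       date_minute=strftime(_time, "%M"),
       date_month=lower(strftime(_time, "%B")),
       date_second=strftime(_time, "%S"),
       date_wday=lower(strftime(_time, "%A")),
       date_year=strftime(_time, "%Y")
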
Hi guys, I just installed the misp42 app in my Splunk and added a MISP instance to Splunk; it works. But I want to compare against my firewall logs from index=firewall srcip=10.x.x.x: I want to compare dstip with ip-dst from MISP to detect unusual access activity, i.e. flag events where dstip equals an ip-dst value such as 152.67.251.30. How can I search this with misp_instance=IP_Block field=value? I have tried some searches but they do not work:

index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=value
| search dstip=ip=dst
| table _time dstip ip-dst value action

It can't get ip-dst from the MISP instance. Can anyone help me with this, or suggest a solution? Many thanks and best regards!!

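If I'm reading the misp42 usage right, mispsearch's field= argument names the event field whose value should be checked against MISP attributes, so it wants field=dstip rather than field=value; a sketch (treat the misp_* output field names as placeholders to verify against your app version, since mispsearch prefixes the attribute metadata it appends):

index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=dstip
| search misp_value=*
| table _time, srcip, dstip, misp_value, action

As a side note, `search dstip=ip=dst` does not compare two fields; a field-to-field comparison needs where, e.g. | where dstip='ip-dst' (single quotes because the hyphenated name is not a valid bare identifier).
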
Hi team, we have a requirement to forward archived data to external storage (a GCS bucket). I have checked the Splunk documentation but haven't had any luck with this. Kindly assist me in forwarding the archived data to the GCS bucket.

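Splunk has no native GCS target for frozen data, but indexes.conf exposes a coldToFrozenScript hook that hands each bucket to a script of your choosing before deletion; a sketch, assuming gsutil is installed on the indexer and my-archive-bucket is a placeholder:

# indexes.conf, per index you want archived
[my_index]
coldToFrozenScript = "/opt/splunk/bin/gcs_archive.sh"

# /opt/splunk/bin/gcs_archive.sh
#!/bin/sh
# Splunk invokes this with the frozen bucket's directory path as $1.
# A non-zero exit makes Splunk keep the bucket and retry later.
gsutil cp -r "$1" "gs://my-archive-bucket/splunk-frozen/" || exit 1
exit 0

Splunk only removes the bucket after the script exits 0, so the copy needs to be reliable and reasonably fast to avoid frozen buckets piling up.
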
The Splunk App for AWS security dashboard shows '0' data; I need help fixing this issue. When I try to run/edit the query, it shows an error as below.
Hello to all, dear friends. Does Splunk have settings to serve only on HTTP version 2.0? Thank you in advance.
I deployed Splunk Universal Forwarder 9.1.1 on Linux servers running on VPC VSIs in IBM Cloud. Some servers are RHEL7, others are RHEL8. These servers send logs to a heavy forwarder server. After deployment, memory usage climbed on each server, and one server went down because of a memory leak. CPU usage is also higher than expected while the splunk process is running. For example, one server's CPU usage increased by 30% and it consumed 5.7GB of memory out of 14GB after the splunk process came up. How can I reduce the resource usage?

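Before tuning, it's worth confirming what splunkd itself reports; the forwarder writes resource data to its introspection logs, and if those reach your indexers, a sketch like this shows the trend (field names are from the standard resource-usage feed, but verify them in your environment):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd host=<uf_host>
| timechart span=5m avg(data.mem_used) AS mem_used_mb, avg(data.normalized_pct_cpu) AS cpu_pct

From there, the usual levers on a UF are trimming overly broad monitor stanzas in inputs.conf and reviewing limits.conf [thruput] maxKBps, but those are general practice rather than a diagnosis of this particular case.
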
Hi team, I'm currently receiving AWS CloudWatch logs in Splunk using the add-on. I'm developing a use case and need to utilize the "eventTime" field from the logs. I require assistance converting eventTime from UTC to SGT. Sample eventTime values in UTC+0:

2023-06-30T17:17:52Z
2023-06-30T21:29:53Z
2023-06-30T22:32:53Z
2023-07-01T00:38:53Z
2023-07-01T04:50:52Z
2023-07-01T05:53:55Z
2023-07-01T06:56:54Z
2023-07-01T07:59:52Z
2023-07-01T09:02:56Z
2023-07-01T10:05:54Z
2023-07-01T11:08:53Z
2023-07-01T12:11:53Z

End result: UTC+0 converted to SGT (UTC+8). Expected output format is "%Y-%m-%d %H:%M:%S".

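A conversion sketch with strptime/strftime; it assumes the field is literally named eventTime (index=aws_cloudwatch is a placeholder) and applies a fixed +8 hours, which is exact for SGT since Singapore has no DST:

index=aws_cloudwatch
| eval et_epoch=strptime(eventTime, "%Y-%m-%dT%H:%M:%SZ")
| eval eventTime_sgt=strftime(et_epoch + 8*3600, "%Y-%m-%d %H:%M:%S")
| table eventTime, eventTime_sgt

One caveat: the trailing Z is matched as a literal here, and strptime interprets a string without a numeric offset in the search user's timezone, so the arithmetic is exact when the user profile timezone is UTC; otherwise adjust the offset (or set the user's timezone to Asia/Singapore, in which case Splunk renders _time in SGT without any eval at all).
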