All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Query:

index=xxx application_code=mobile NOT feature | stats count by code message | sort -count | eval message=substr(message, 1, 40)

Output:

code | message | count
mobile-job-115 | application error occured | 100
mobile-app-180 | application is stable | 240
app-job-800 | information good | 34
project-job-100 | system error occured | 10
project-job-100 | system error occured | 20
project-job-100 | system error occured | 34
project-job-100 | system error occured | 23
project-job-100 | system error occured | 50

Expected output:

code | message | count
mobile-job-115 | application error occured | 100
mobile-app-180 | application is stable | 240
app-job-800 | information good | 34
project-job-100 | system error occured | 137

I want my table to display the count as one value for similar messages, for example "system error occured", as shown above.
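One likely reason the rows stay separate is that the eval runs after the stats, so messages that only become identical once truncated are still counted individually. A minimal sketch of one way around this, reusing the index and field names from the query above and assuming the first 40 characters are what define "similar": truncate before aggregating.

index=xxx application_code=mobile NOT feature
| eval message=substr(message, 1, 40)
| stats count by code, message
| sort -count

If the per-row counts have already been produced by an earlier stats, an alternative is to re-aggregate them afterwards with | stats sum(count) AS count by code, message.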
I've seen similar posts but most are without an answer or the answer doesn't apply to me. I'm sending a valid blob of JSON to HEC, and am seeing this error in the log: ERROR JsonLineBreaker [2809 parsing] - JSON StreamId:0 had parsing error:Unexpected character while looking for value: 'm' - data_source="http:***", data_host="compy-manjaro", data_sourcetype=" _json"   Here is my HEC token's config:   Here is the config of the related index (type is "metrics"):   Here's an example payload:     { "event": "metric", "time": 1678911825, "host": "compy-manjaro", "fields": { "app.name": "my-app", "app.version": "v0.0.1 (unknown@unknown)", "health:db": 0, "health:diskSpace": 0, "health:mail": 0, "health:ping": 0, "application.ready.time:value": 15603.0, "application.started.time:value": 15593.0, "disk.free:value": 2.210336768E10, "disk.total:value": 2.4284653568E11, "executor.active:value": 0.0, "executor.completed:count": 0.0, "executor.pool.core:value": 0.0, "executor.pool.max:value": 2.147483647E9, "executor.pool.size:value": 0.0, "executor.queue.remaining:value": 2.147483647E9, "executor.queued:value": 0.0, "hikaricp.connections.acquire:count": 12.0, "hikaricp.connections.acquire:max": 0.0, "hikaricp.connections.acquire:total": 8.146637, "hikaricp.connections.active:value": 0.0, "hikaricp.connections.creation:count": 0.0, "hikaricp.connections.creation:max": 0.0, "hikaricp.connections.creation:total": 0.0, "hikaricp.connections.idle:value": 11.0, "hikaricp.connections.max:value": 40.0, "hikaricp.connections.min:value": 10.0, "hikaricp.connections.pending:value": 0.0, "hikaricp.connections.timeout:count": 0.0, "hikaricp.connections.usage:count": 12.0, "hikaricp.connections.usage:max": 0.0, "hikaricp.connections.usage:total": 59.0, "hikaricp.connections:value": 11.0, "jdbc.connections.active:value": 0.0, "jdbc.connections.idle:value": 11.0, "jdbc.connections.max:value": 40.0, "jdbc.connections.min:value": 10.0, "jvm.buffer.count:value": 17.0, "jvm.buffer.memory.used:value": 0.0, "jvm.buffer.total.capacity:value": 0.0, "jvm.classes.loaded:value": 22964.0, "jvm.classes.unloaded:count": 6.0, "jvm.gc.live.data.size:value": 0.0, "jvm.gc.max.data.size:value": 8.405385216E9, "jvm.gc.memory.allocated:count": 1.023410176E9, "jvm.gc.memory.promoted:count": 1.22555392E8, "jvm.gc.overhead:value": 0.005311596570632951, "jvm.gc.pause:count": 9.0, "jvm.gc.pause:max": 0.0, "jvm.gc.pause:total": 175.0, "jvm.memory.committed:value": 1.6449536E7, "jvm.memory.max:value": -1.0, "jvm.memory.usage.after.gc:value": 0.01895299976219436, "jvm.memory.used:value": 1.59307264E8, "jvm.threads.daemon:value": 45.0, "jvm.threads.live:value": 68.0, "jvm.threads.peak:value": 69.0, "jvm.threads.states:value": 0.0, "logback.events:count": 0.0, "process.cpu.usage:value": 0.007488087134104833, "process.files.max:value": 524288.0, "process.files.open:value": 373.0, "process.start.time:value": 1.678911778094E12, "process.uptime:value": 47711.0, "system.cpu.count:value": 8.0, "system.cpu.usage:value": 0.1834410064603876, "system.load.average.1m:value": 4.71533203125, "tomcat.sessions.active.current:value": 0.0, "tomcat.sessions.active.max:value": 0.0, "tomcat.sessions.alive.max:value": 0.0, "tomcat.sessions.created:count": 0.0, "tomcat.sessions.expired:count": 0.0, "tomcat.sessions.rejected:count": 0.0 } }      
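Not a definitive diagnosis, but one thing worth checking: a metrics-type index expects HEC payloads in the metrics event format rather than an arbitrary JSON blob under "event". A minimal single-metric sketch against the standard collector endpoint (the hostname, port, and token are placeholders; the host and metric name are taken from the payload above):

curl -k https://splunk.example.com:8088/services/collector \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"time": 1678911825, "host": "compy-manjaro", "event": "metric", "fields": {"metric_name": "disk.free", "_value": 22103367680, "app.name": "my-app"}}'

If a payload shaped like this indexes cleanly, the problem is probably the shape of the original payload (each metric value needs to be paired with a metric name) rather than the HEC token or the index settings.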
I have the install file for version 3.1 and it had eventgen.conf. Version 5.1 does not have eventgen.conf, and I do not see in the release notes when it was removed. How do I go about using this app to generate events?
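In case it helps while you track down the packaging change: recent versions of SA-Eventgen still read eventgen.conf stanzas, but you typically supply your own in an app's default/ or local/ directory, next to a sample file in that app's samples/ folder. A rough sketch, where the sample file name, index, and sourcetype are assumptions:

# samples/sample_events.log must exist in the same app
[sample_events.log]
mode = sample
interval = 60
count = 10
earliest = -60s
latest = now
index = main
sourcetype = my:sample:data
# rewrite the leading timestamp on each generated event
token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %Y-%m-%d %H:%M:%S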
Hello, this is regarding AppD integration with Cognos Analytics 11.1.7. After setting up both the machine and app agents, everything seems to be working OK with both the application and AppD monitoring, but I'm seeing the following errors in the application logs. Has anyone come across these, or have any input on how to resolve them? This is the AppD version: appd_22.5.0.tar.gz

2023-03-15T18:07:00.263+0000 ERROR com.cognos.pogo.bibus.BIBusCommand [Default Executor-thread-47] Cs2Gsd2yvjv9s4lwq2lqq449vM4lsq2M2yl8yGdv Cs2Gsd2yvjv9s4lwq2lqq449vM4lsq2M2yl8yGdv Mhsl9sGh9w2898CG9yMMjwwsvyCvvGlhjC8jlwdw NA 10.60.19.137 40442 NA IBM Cognos 9848 Set-Cookie not added to the response envelope for following: set-cookie: ADRUM_BTa=R:52|g:229b0857-9dbd-4579-b29e-89375bc624ac|n:xxxxx-test_469db7a0-d416-4fa5-835e-e26c3316559d; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/, ADRUM_BT1=R:52|i:14977382|e:1052; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/
org.apache.commons.httpclient.HttpException: Unable to parse expiration date parameter: "Thu"
at com.cognos.pogo.util.MyCookie.parse(MyCookie.java:148) ~[p2pd.jar:?]
at com.cognos.pogo.util.MyCookie.parse(MyCookie.java:59) ~[p2pd.jar:?]
at com.cognos.pogo.pdk.BIBusEnvelope.addSetCookies(BIBusEnvelope.java:1140) ~[p2pd.jar:?]
at com.cognos.pogo.bibus.BIBusCommand.processSetCookie(BIBusCommand.java:544) [p2pd.jar:?]
at com.cognos.pogo.bibus.BIBusCommand.handleResponse(BIBusCommand.java:535) [p2pd.jar:?]
at com.cognos.pogo.bibus.BIBusCommand.processResponse(BIBusCommand.java:263) [p2pd.jar:?]
at com.cognos.pogo.bibus.BIBusCommand.executeCommand(BIBusCommand.java:218) [p2pd.jar:?]
at com.cognos.pogo.bibus.BIBusCommand.execute(BIBusCommand.java:201) [p2pd.jar:?]
at com.cognos.pogo.handlers.contentmanager.CMHandler.executeCmCommand(CMHandler.java:168) [p2pd.jar:?]
at com.cognos.pogo.handlers.contentmanager.CMHandler.invokeImpl(CMHandler.java:152) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.handlers.logic.ChainHandler.invokeImpl(ChainHandler.java:53) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.auth.NewAuthHandler.invokeImpl(NewAuthHandler.java:126) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.handlers.logic.IfHandler.invokeImpl(IfHandler.java:56) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.handlers.logic.ChainHandler.invokeImpl(ChainHandler.java:53) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.impl.PogoEngineImpl.invokeHandler(PogoEngineImpl.java:158) [p2pd.jar:?]
at com.cognos.pogo.handlers.engine.ServiceLookupHandler.invokeImpl(ServiceLookupHandler.java:127) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.handlers.logic.ChainHandler.invokeImpl(ChainHandler.java:53) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.handlers.performance.PerformanceIndicationHandler.invokeImpl(PerformanceIndicationHandler.java:118) [p2pd.jar:?]
at com.cognos.pogo.pdk.BasicHandler.invoke(BasicHandler.java:203) [p2pd.jar:?]
at com.cognos.pogo.impl.PogoEngineImpl.service(PogoEngineImpl.java:126) [p2pd.jar:?]
at com.cognos.pogo.transport.PogoServlet.processRequest(PogoServlet.java:273) [p2pd.jar:?]
at com.cognos.pogo.transport.PogoServlet.doPost(PogoServlet.java:736) [p2pd.jar:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) [com.ibm.websphere.javaee.servlet.3.1_1.0.58.jar:?]
at com.cognos.pogo.pdk.performance.servlet.PerformanceIndicatorWrappedServlet.service(PerformanceIndicatorWrappedServlet.java:31) [p2pd.jar:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [com.ibm.websphere.javaee.servlet.3.1_1.0.58.jar:?]
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1258) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:746) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:443) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:193) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:98) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.bi.logging.glug.support.web.BITransactionFilter.doFilter(BITransactionFilter.java:68) [glug-support.jar:11.1.7-22091315]
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.bi.logging.glug.support.web.BITransactionFilter.doFilter(BITransactionFilter.java:68) [glug-support.jar:11.1.7-22091315]
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:91) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:1002) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1140) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1011) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) [com.ibm.ws.webcontainer_1.1.58.jar:?]
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1159) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:428) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:387) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:566) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:500) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:360) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:327) [com.ibm.ws.transport.http_1.0.58.jar:?]
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167) [com.ibm.ws.channelfw_1.0.58.jar:?]
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75) [com.ibm.ws.channelfw_1.0.58.jar:?]
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504) [com.ibm.ws.channelfw_1.0.58.jar:?]
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574) [com.ibm.ws.channelfw_1.0.58.jar:?]
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958) [com.ibm.ws.channelfw_1.0.58.jar:?]
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047) [com.ibm.ws.channelfw_1.0.58.jar:?]
at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:238) [com.ibm.ws.threading_1.1.58.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_311]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_311]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]
Hi, let's say that I have a database with a table like this: I would like to know if it is possible to access and store the "New" and "Closed" statuses in some variables, in order to draw a chart based on how many new items were created and how many were closed over the same period. Thank you for your help, Oulebsir Kiman
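If the table lives in a database that Splunk DB Connect can reach, one hedged sketch is to pull the rows with dbxquery and chart the two statuses over time. The connection name, table, and column names below are placeholders; adjust the strptime format to match how the dates are stored:

| dbxquery connection="my_db_connection" query="SELECT status, created_date FROM tickets"
| eval _time=strptime(created_date, "%Y-%m-%d %H:%M:%S")
| timechart span=1d count(eval(status="New")) AS new count(eval(status="Closed")) AS closed

The count(eval(...)) pattern gives one series per status, so the resulting chart shows items created and items closed side by side per day.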
We are trying to follow this example; however, we are getting this error: {"message": "Error provisioning account on any cluster". Of course we are using the correct account name and API key. Any ideas? We are using our controller URL.

POST https://analytics.api.example.com/events/publish/{schemaName}
X-Events-API-AccountName:<global_account_name>
X-Events-API-Key:<api_key>
Content-Type: application/vnd.appd.events+json;v=2
Accept: application/vnd.appd.events+json;v=2

{ "schema" : { "account": "integer", "amount": "float", "product": "string" } }
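In case the request flow is part of the issue: in the documented Analytics Events API the schema is created against the schema endpoint first, and the publish endpoint then receives an array of event records rather than a schema definition. A rough sketch of the two calls (the endpoint, account name, and key are placeholders, and the correct base URL is the Events Service endpoint for your SaaS region or on-premises install, not necessarily the controller URL):

POST https://analytics.api.example.com/events/schema/mySchema
X-Events-API-AccountName: <global_account_name>
X-Events-API-Key: <api_key>
Content-Type: application/vnd.appd.events+json;v=2

{"schema": {"account": "integer", "amount": "float", "product": "string"}}

POST https://analytics.api.example.com/events/publish/mySchema
X-Events-API-AccountName: <global_account_name>
X-Events-API-Key: <api_key>
Content-Type: application/vnd.appd.events+json;v=2

[{"account": 42, "amount": 12.5, "product": "example"}]

If both calls fail the same way, the account name / key / Events Service endpoint combination is worth double-checking, since the error message mentions account provisioning rather than the payload.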
Hello! One of our customers has a problem with the executable "C:\Program Files\SplunkUniversalForwarder_script\files\blat\blat.exe", which tries to launch this command:

"C:\Program Files\SplunkUniversalForwarder_script\files\blat\blat.exe" -install mailrelay2.domain.com hostname@domain.com

Can you help me understand whether this process is generated by Splunk or whether it is a custom process? Thank you, Mauro
My app is failing the upgrade readiness check for Python 3. I am getting an error in the cim_actions.py file. I have upgraded the readiness app to the new version. I even tried creating a new app from scratch, and even that fails the readiness check.
I need to view the cost per application in Splunk to compare with different products. For instance, I need to see how much it is costing us to ingest data for the M365 application for Splunk. Where can I do this, and what level of permissions do I need? Also, how do I delete an application from Splunk Cloud so that we are no longer billed for the data? Thank you
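One way to approximate per-app cost, assuming the app's data lands in its own index or sourcetype, is the license usage log. Reading the _internal index typically requires an admin-level role, and in Splunk Cloud the Cloud Monitoring Console app exposes similar license usage views:

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) AS bytes BY idx, st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

Multiplying an index or sourcetype's share of total ingest by your contract cost gives a rough per-application figure.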
Hello, I'm building a report to list all phishing and malware threat detections by sender, classification, and threat URL. The data contains two types of events, "clicksAllowed" and "clicksBlocked". I want to add a list of recipients whose click was allowed ("clicksAllowed"), and I'm struggling with how to structure my query. I'm currently trying to do this with stats and eval (I also thought about using a subsearch). Hopefully I'm on the right track, but I can't figure out how to show only the recipients who clicked while still showing counts of how many clicks were allowed and blocked.

Current search (without who clicked):

index=tap sourcetype="pp_tap_siem" classification IN (phish, malware) threatStatus=active
| eval time=strftime(_time,"%m/%d/%y @ %H:%M:%S")
| stats earliest(time) AS First_Seen, latest(time) AS Last_Seen, count(eval(eventType="clicksPermitted")) AS Clicks_Permitted, count(eval(eventType="clicksBlocked")) AS Clicks_Blocked, values(threatURL) AS TAP_Link BY sender, classification, url
| table First_Seen, Last_Seen, classification, sender, Clicks_Permitted, Clicks_Blocked, AT_Risk_Users, url, TAP_Link
| sort -Last_Seen

Output looks like:

First_Seen | Last_Seen | classification | sender | Clicks_Permitted | Clicks_Blocked | AT_Risk_Users | url | TAP_Link
03/14/23 @ 17:52:36 | 03/14/23 @ 17:52:36 | phish | badguy@domain.com | 1 | 1 | list of 1 person here | hxxp://baddomain.com | hxxp://link_tothreatintel_webportal.com/uniqueguid
01/05/23 @ 12:34:44 | 01/05/23 @ 17:44:41 | phish | badguy2@domain.com | 39 | 3 | list of 39 people here | hxxp://baddomain2.com | hxxp://link_tothreatintel_webportal.com/uniqueguid
01/18/23 @ 15:43:20 | 02/16/23 @ 22:46:19 | malware | badguy3@domain.com | 4 | 0 | list of 4 people here | hxxp://baddomain.com | hxxp://link_tothreatintel_webportal.com/uniqueguid
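A hedged sketch of one way to add the recipients without a subsearch, assuming the TAP click events carry a recipient field: use an eval inside values() so that only permitted clicks contribute, mirroring the count(eval(...)) calls already in the search. Only the stats line changes:

| stats earliest(time) AS First_Seen, latest(time) AS Last_Seen, count(eval(eventType="clicksPermitted")) AS Clicks_Permitted, count(eval(eventType="clicksBlocked")) AS Clicks_Blocked, values(eval(if(eventType="clicksPermitted", recipient, null()))) AS AT_Risk_Users, values(threatURL) AS TAP_Link BY sender, classification, url

The if() returns null for blocked clicks, so AT_Risk_Users only lists people whose click was permitted while the two counts still cover both event types.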
I want to blacklist the "filtered__results.json" file and allow Splunk to ingest anything like "filtered__results.json265964694". How do I do this? What is the correct regex for blacklisting "filtered__results.json"?
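A minimal inputs.conf sketch (the monitored path is a placeholder): blacklist is a regular expression matched against the full file path, so anchoring it with $ excludes only the exact name:

[monitor:///var/log/myapp]
# skip only the file that ends exactly in filtered__results.json
blacklist = filtered__results\.json$

With this, filtered__results.json is skipped while filtered__results.json265964694 is still ingested, because the trailing digits mean the pattern no longer matches at the end of the path.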
Hi, while trying to configure the Rapid7 InsightVM app, the data is not being indexed into the index which I have configured.

Name: InsightVM_Assets
Interval: 3600
Full import schedule (Days): 0
Index: test
Status: false
InsightVM Connection: Splunk_Rapid7
Asset Filter: Site IN [Rapid7]
Import vulnerabilities: 1
Include same vulnerabilities: 0

What changes do we need to make to get the data into the test index?
I'm new to Splunk, so I apologize if this is very obvious, but I haven't seen anything that seems to fit my needs exactly in the community. I'm trying to build a dashboard that will display temperature values from sensors based on messages received in a stream. The messages come in with a time, a sensor id/name, and a temperature. For any given period of time I won't know how many sensors I will receive temperatures from. Currently my query is based on a table that splits the sensors into columns and then adds the values based on time:  This kind of works for me, except I need my dashboard to look like this:  The line chart is probably good enough, because I can set the nullValueMode to connect, which covers the gaps in data. But the Singles and Sparklines at the top are not very useful. Basically I'm looking for any suggestions on how I can improve the query to make that top section work better. I've tried to keep track of a "lastKnownTemp" using last() to fill in the null values, but I don't know how to do it for an unknown number of sensors. Ideally I think this is the way I would want to go, if someone knows of a way to accomplish it. I've considered using transactions to split the messages by sensor id, but then when I get a single event that has a bunch of events inside, I don't really know what to do with them. Any suggestions or information would be greatly appreciated.
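A hedged sketch, assuming the events carry sensor_id and temperature fields (the index and field names are placeholders): timechart splits into one column per sensor automatically, however many sensors report, and filldown carries each sensor's last known value forward so the single-value panels always have something recent to show:

index=sensors sourcetype=sensor_stream
| timechart span=5m latest(temperature) BY sensor_id
| filldown

Because filldown with no field list applies to every column, this behaves like the "lastKnownTemp" idea without having to name the sensors in advance.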
I have the following data in a cell that reads:

1.01.01 Example App AL11111

Is there a way I can split the data into 3 separate columns? There are no delimiters; I thought about splitting on spaces, but I have entries that have spaces in the middle section, e.g.:

1.1.1.10 Example App AL11111

One thing to note: the initial numbers will always be 8 characters long and the AL***** code will always be 7 characters. Thanks
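A hedged sketch with rex, assuming the value sits in a field called cell, the first block never contains spaces, and the trailing code is always "AL" followed by five characters:

| rex field=cell "^(?<ref_number>\S+)\s+(?<app_name>.+?)\s+(?<al_code>AL\S{5})$"

The lazy middle group absorbs any spaces in the name, so "Example App" comes through intact. If the fixed widths (8 leading characters, 7 trailing) are fully reliable, an eval with substr() on those positions would work just as well.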
Indicator "ingestion_latency_gap_multiplier" exceeded configured value. The observed value is 98344.   Is this normal? We have Splunk Universal Forwarder installed on all systems and forwarding E... See more...
Indicator "ingestion_latency_gap_multiplier" exceeded configured value. The observed value is 98344.   Is this normal? We have Splunk Universal Forwarder installed on all systems and forwarding Event logs. Is there any way to improve ingestion latency?
We did a Linux patching cycle about a month ago. We have a 10-indexer, 2-site cluster with 3:3 search and replication factors. I put the cluster into maintenance mode, stop Splunk on an indexer, patch, reboot, wait until the indexer is up on the cluster manager, and then repeat the cycle for the remaining indexers. Usually after the patching is done the bucket fixup tasks are few and resolve rapidly. This past patching cycle we had over 10k that are slowly resolving (maybe 30 a day). If I resync a bucket, that number immediately drops by the amount I resync. I can only do 20 at a time because the cluster manager only allows 20 per page. That approach is silly when it has to be done 10k times (currently sitting at 7k). I saw a community post saying that a rolling restart of the cluster would resolve this issue, but it didn't. I did notice that there are 18 indexes (out of 126) that have excess buckets; I wasn't sure if that affects anything. Is there a way to resync buckets more easily? Maybe 100 at a time, without having to click through prompts?
Hello. I'm trying to create a report which will send a daily email. I'm using the "Send email" action to send the report, and I have two options set there:
- Inline Table
- Attach CSV

My question is: can I, for example, have the "Inline Table" limited to, let's say, the top 10 results? What I want to achieve here is a short summary in the e-mail body (the top 10 results) and the full search result in the CSV file (which can have hundreds of rows). Is this even possible with this one action?
Hi! I'm working on an alert for access from different countries for certain users in a short time period. The alert and the search work fine, but I would like to show more info when the alert triggers (source IP and time).

Here is a sample of the event:

09:09:55,377 INFO [XX.XXX.XXXXXXX.cbapi.dao.login.LoginDAOImpl] (default task-34878) Enviamos parámetros: [authTipoPassword=E, authDato=4249929, authTipoDato=D, nroDocEmpresa=80097256-2, tipoDocEmpresa=D, authCodCanal=999, authIP=45.170.128.191, esDealer=N, dispositivoID=40ee57e1-e5eb-4b14-b7ef-9f0f8ccdf6c 2, dispositivoOS=null ]

Here is the search:

index="XXXX" host="XXX.XXX.-*" sourcetype=XXXXXXCBAPI* authDato authIP dao.login.LoginDAOImpl authIP=* authCodCanal=999
| iplocation authIP
| eval Country = if(isnull(Country) OR Country="", "Unknown", Country)
| stats dc(Country) AS count values(Country) AS country values(authIP) as authIP latest(_time) AS latest BY authDato
| where count > 1
| eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| sort - latest

With this I get a result like this:

authdato | count | Country | authIP | latest
2363494 | 2 | Argentina | 170.51.250.39 | 2023-03-15 09:09:09
        |   | Paraguay | 170.51.55.186 |

The thing is, the IP addresses aren't aligned with the country for that IP, and the time isn't aligned with the last country or IP address either. I've tried several things but still can't figure out how to present the results correctly (in the right order, I mean).
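One way to keep the country, IP, and time aligned per login, sketched with the field names from the search above: build a combined string per event before the stats, so that list() keeps one entry per event in order instead of the deduplicated, re-sorted output of values():

index="XXXX" host="XXX.XXX.-*" sourcetype=XXXXXXCBAPI* authDato authIP dao.login.LoginDAOImpl authIP=* authCodCanal=999
| iplocation authIP
| eval Country=if(isnull(Country) OR Country="", "Unknown", Country)
| eval access=strftime(_time, "%Y-%m-%d %H:%M:%S")." ".Country." ".authIP
| stats dc(Country) AS count list(access) AS accesses latest(_time) AS latest BY authDato
| where count > 1
| eval latest=strftime(latest, "%Y-%m-%d %H:%M:%S")
| sort - latest

Each row of accesses then reads as "time country IP", so the three values stay matched for every login that contributed to the alert.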
I am looking for a search query which can give me a result of any Docker container connections to unusual ports. Tried the query below:

index=aws_eks_* responseObject.spec.limits{}.type=*container* | NOT search port IN (80,443,8080,8443,3000,3306)
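A minimal correction of the filter syntax, assuming a port field is extracted from these events: NOT belongs inside the search expression, before the condition, rather than in front of the search command:

index=aws_eks_* responseObject.spec.limits{}.type=*container*
| search NOT port IN (80, 443, 8080, 8443, 3000, 3306)

This keeps only events whose port is outside the "usual" list; extend the list to whatever ports are expected in your environment.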
The above snippet shows the raw data in the events in our Splunk environment. I need help extracting the jobIds (highlighted in the raw data) and adding them as a separate field, like below, using SPL in the user interface.
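Since the screenshot with the raw events and the highlighted jobIds isn't visible here, the exact pattern is a guess; a hedged sketch with rex, assuming the values follow a literal jobId label in the raw text, would look something like:

... | rex max_match=0 "jobId[\"':=\s]+(?<jobId>[A-Za-z0-9_-]+)"
| table _time jobId

Adjust the label and the character class to match how jobId actually appears in the events; max_match=0 captures every occurrence in an event as a multivalue field rather than only the first.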