So I have a search I run for an alert that looks for a missing event. It's a simple tstats search that shows results within the last 30 days. I would like to compare against the 90-day variant in the same search and determine the missing events. Any ideas?
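One possible approach (a sketch only; the index name and the split-by field are assumptions, since the original search isn't shown) is to run a single tstats over 90 days, label each day as falling in the last 30 days or the prior 60, and keep the values that only appear in the older window:

| tstats count where index=your_index earliest=-90d latest=now by host _time span=1d
| eval period=if(_time >= relative_time(now(), "-30d@d"), "last30d", "prior60d")
| chart sum(count) over host by period
| fillnull value=0
| where last30d=0 AND prior60d>0

The final where clause returns hosts (or whatever field you split by) that were seen in the prior 60 days but are missing from the last 30.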
Hello, I have the below SPL with the two mvindex functions. mvindex position 6 in the array is supposed to apply HTTP statuses for /developers, and mvindex position 10 is supposed to apply HTTP statuses for /apps. Currently positions 6 and 10 are crossing events, applying to both APIs. Is there any way I can have one mvindex apply to one command?

(index=wf_pvsi_virt OR index=wf_pvsi_tmps) (sourcetype="wf:wca:access:txt" OR sourcetype="wf:devp1:access:txt") wf_env=PROD
| eval temp=split(_raw," ")
| eval API=mvindex(temp,4,8)
| eval http_status=mvindex(temp,6,10)
| search ( "/services/protected/v1/developers" OR "/wcaapi/userReg/wgt/apps" )
| search NOT "Mozilla"
| eval API = if(match(API,"/services/protected/v1/developers"), "DEVP1: Developers", API)
| eval API = if(match(API,"/wcaapi/userReg/wgt/apps"), "User Registration Enhanced Login", API)
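A sketch of one way to keep the two positions separate (positions 6 and 10 are taken from the original search; the assumption is that each event belongs to exactly one of the two APIs) is to pick the mvindex position with a case() keyed on which API the raw event matches, instead of taking a slice that covers both:

| eval temp=split(_raw," ")
| eval http_status=case(
    like(_raw, "%/services/protected/v1/developers%"), mvindex(temp,6),
    like(_raw, "%/wcaapi/userReg/wgt/apps%"), mvindex(temp,10))
| eval API=case(
    like(_raw, "%/services/protected/v1/developers%"), "DEVP1: Developers",
    like(_raw, "%/wcaapi/userReg/wgt/apps%"), "User Registration Enhanced Login")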
Field = 1.123456789
Field = 14.123456
Field = 3.1234567

I need to run a query that will return the number of decimals for each record in Field. Expected Result: 9, 6, 7
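One possible approach (a sketch; it assumes Field is, or can be converted to, a string with at most one decimal point) is to split on the dot and take the length of the fractional part:

| eval dec_part=mvindex(split(tostring(Field), "."), 1)
| eval num_decimals=if(isnull(dec_part), 0, len(dec_part))

For Field = 1.123456789 this returns 9; values with no decimal point return 0.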
Join TekStream for a demonstration of Splunk Synthetic Monitoring with real-world examples!

Highlights:
What are the main features of Splunk Synthetic Monitoring?
How can I use SSM to simulate user traffic on my web site?
Using SSM to report on SLAs for availability and performance
Can I use SSM to test my REST API endpoints? (hint: yes!)
How to use SSM with an internal-only site or API

Watch On-Demand

About TekStream
TekStream (www.tekstream.com), headquartered in Atlanta, Georgia, helps clients accelerate digital transformation by navigating complex technology environments with a combination of technical expertise and staffing solutions. TekStream provides battle-tested processes and methodologies to help companies with legacy systems get to the cloud faster, so they can become more agile, reduce costs, and improve operational efficiencies. With hundreds of successful deployments, TekStream guarantees on-time and on-budget project delivery and is proud to have 97% customer retention.
Hello all. I have a .csv report that gets generated regularly and that I'm monitoring; that part is working fine. I'm trying to figure out how to display it, because the data (events?) are in columns. Is this possible? Example data here:

Hosts     server1   server2
IPLevel   median    median
Tip1662   N/A       N/A
Tip1663   PASSED    PASSED
Tip1664   FAILED    FAILED
Tip1666   PASSED    PASSED
Tip1667   PASSED    PASSED
Tip1668   PASSED    PASSED
Tip1669   N/A       N/A
Tip1671   PASSED    PASSED
Tip1674   SKIPPED   SKIPPED
Tip1675   FAILED    FAILED
Tip1676   PASSED    PASSED
Tip1677   PASSED    PASSED
Tip1680   PASSED    PASSED
Tip1685   PASSED    PASSED
Tip1687   PASSED    PASSED
Tip1688   SKIPPED   SKIPPED
Tip1689   SKIPPED   SKIPPED
Tip1690   FAILED    FAILED
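If each CSV row is indexed as an event with fields named after the header (Hosts, server1, server2 -- an assumption about how the file is being parsed), a sketch like the following tabulates the report, and transpose can flip it so that each host becomes a row instead of a column:

index=your_index sourcetype=your_csv_sourcetype
| table Hosts server1 server2
| transpose 0 header_field=Hosts column_name=Host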
Instrumenting Java Websocket Messaging

This article is a code-based discussion of passing OpenTelemetry trace context across the STOMP protocol with a brokered websocket. This example uses Spring Boot for most components. The full source code for this walkthrough is available here.

Introduction

Tracing in distributed systems can be challenging, and often more so when messaging systems are involved. To stitch together a comprehensive trace, the trace context must be propagated between observed components. In HTTP systems, this is relatively straightforward, and the W3C HTTP headers are readily propagated by OpenTelemetry instrumentation. Messaging systems almost always have headers too, but, depending on the implementation, propagating context through them can be tricky. This is further complicated by the ability of many messaging systems to support one-to-many or even many-to-one models. The OpenTelemetry community has built a detailed set of specifications around messaging systems that you can read here. Not every type of messaging system has been covered yet by OpenTelemetry's Java auto-instrumentation. Depending on the protocols in place, it could be even worse -- we could have the ability to use or see headers with each protocol frame, the envelope that contains messages, and possibly within the message itself! Yikes.

When auto-instrumentation is not sufficient or has not yet been built, we can almost always resort to building manual instrumentation to help out. In this article, we will walk through a sample messaging project that does not have comprehensive auto-instrumentation. We will roll up our sleeves and implement manual tracing and context propagation that stitches together pub/sub components into a single trace.

Topology

We recently had a user who was running into difficulty with websocket instrumentation. In their use case, they are using the STOMP protocol over the websocket with the Spring project's websocket and messaging support. In this configuration, they are able to build a pub/sub messaging framework. The server exposes a websocket. The publisher connects to this websocket in order to send messages, and a subscriber connects to the same websocket in order to receive messages. A small Spring Controller helps convert the type of the message and route it to another destination. It looks something like this:

The WsPublisher connects to the websocket and publishes/sends JSON messages in STOMP format to /app/tube. The WsServerController contains a message mapping that converts ExampleMessages into TimestampedMessages and sends these to /topic/messages. This acts as our "business" layer, which you could imagine containing much more sophisticated message transformation. The WsSubscriber also connects to the websocket and creates a subscription to /topic/messages. When it receives a message, it logs the content.

For demo simplicity, we have kept this as a single monolithic Java process, but there is nothing in this concept that mandates that. In a real-world deployment, you would expect the three components to be deployed as separate processes on separate hosts or containers.

Out of the Box

Out of the box, this configuration yields pretty uninteresting, broken, unlinked traces. In fact, for a 10-minute run that pub/subbed 300+ messages, we only see 3 traces: The first two look oddly similar: These are being generated by the spring-webmvc instrumentation when the publisher and the subscriber perform their initial connection to UPGRADE the plain HTTP connection to a websocket.
That's informative, but not super helpful for long runs, and it doesn't give any information about our messaging component. The third trace is being generated by the spring-scheduling instrumentation, presumably to handle some websocket or messaging/routing internals: All of these separate traces, none of them linked. We can make this better!

Adding Manual Instrumentation

Let's add some manual OpenTelemetry instrumentation to our project in order to pass trace context to our components and to improve our observability story. We start by adding these two OTel dependencies to our build.gradle.kts file:

dependencies {
    implementation("io.opentelemetry:opentelemetry-sdk:1.21.0")
    implementation("io.opentelemetry.instrumentation:opentelemetry-instrumentation-annotations:1.22.1")
    ...
}

Publisher

We'll start with the publisher. Our scheduled job invokes WsPublisher.sendOne() every 2 seconds. We'll begin by adding the OpenTelemetry @WithSpan annotation to this method:

@WithSpan(value = "/app/tube publish", kind = SpanKind.PRODUCER)
private void sendOne() {
    ...
}

This annotation tells OpenTelemetry to create a new span every time the sendOne() method is invoked. In keeping with the specification, we indicate that the span should be a PRODUCER. Because this is the start of our pub/sub process, there is no parent and this will be our root span in the trace.

The component downstream from us needs to know our trace context, which includes our trace ID + span ID parentage. Right now, however, our context is not propagated into the STOMP data frames, and we need to write a little code. To do this, we make a new instance of StompHeaders and set our message destination.

StompHeaders headers = new StompHeaders();
headers.setDestination("/app/tube");

We then leverage the OpenTelemetry API to inject our current trace context into the context propagation mechanism, which is implemented with the TextMapPropagator.

GlobalOpenTelemetry.getPropagators()
    .getTextMapPropagator()
    .inject(Context.current(), headers, (carrier, key, value) -> {
        if (carrier != null) {
            carrier.set(key, value);
        }
    });

What's cool about this approach is that, as a user, we don't have to know the inner workings of the propagation mechanism. For example, we don't ever have to reference the name of the context header or the format of the data inside the propagation value! Our lambda just serves as a little type adapter for the specific implementation. That's it for the publisher. Now we'll move on to the server-side router.

Server/Router

Our routing method on the server side is WsServerController.routeTube(), and it is annotated with Spring's @MessageMapping. From the documentation we have learned that we can add a SimpMessageHeaderAccessor parameter to our method so that we can access the headers present on the incoming message. So our signature looks like this:

@MessageMapping("/tube")
public void routeTube(ExampleMessage exampleMessage, SimpMessageHeaderAccessor headerAccessor) {
    ...
}

In order to put ourselves into the correct trace context, we consult the upstream OTel documentation and learn that we should implement an interface that lets us get a header value from our SimpMessageHeaderAccessor.
For simplicity we build it as an inner class:

static class HeadersAdapter implements TextMapGetter<SimpMessageHeaderAccessor> {
    @Override
    public String get(@Nullable SimpMessageHeaderAccessor carrier, String key) {
        return carrier.getFirstNativeHeader(key);
    }

    @Override
    public Iterable<String> keys(SimpMessageHeaderAccessor carrier) {
        return carrier.toMap().keySet();
    }
}

Now that we have this TextMapGetter impl, we can extract the incoming trace context from OpenTelemetry:

var traceContext = GlobalOpenTelemetry.getPropagators()
    .getTextMapPropagator()
    .extract(Context.current(), headerAccessor, new HeadersAdapter());

But what do we do with it? Well, we make it "current", of course, using a Java autocloseable try block:

try (var scope = traceContext.makeCurrent()) {
    ...
}

Now that we're parented in the existing scope, we want to create a new span that represents our routing action. You'll frequently encounter this common pattern when doing manual instrumentation like this:

get a tracer
create a SpanBuilder
start the span
make the new span the current scope
(do business logic)
end the span

Which in this case looks like this:

var serverSpan = tracer.spanBuilder("/tube process")
    .setSpanKind(SpanKind.SERVER)
    .startSpan();
try (Scope x = serverSpan.makeCurrent()) {
    doRoute(exampleMessage, headerAccessor);
} finally {
    serverSpan.end();
}

I chose SERVER here, but I'm not 100% sure that's right and maybe there's a case to be made for CONSUMER. Down in doRoute() you should notice the same basic header injection method that we used for the publisher:

var headers = new HashMap<>(msgHeaders.toMap());
GlobalOpenTelemetry.getPropagators()
    .getTextMapPropagator()
    .inject(Context.current(), headers, (carrier, key, value) -> {
        if (carrier != null) {
            carrier.put(key, value);
        }
    });

That's all for the server routing side. A new trace will be created, and our context will be put into the routed message headers. That message will be received by our subscriber.

Subscriber

The subscriber is the last piece of our puzzle, and the business method is WsSubscriber.handleFrame(). This method receives the STOMP message headers (StompHeaders) and the payload message object instance. Just like we did with the router, we extract the incoming trace context from OpenTelemetry.
Rather than create an inner class, this time we used an anonymous class and hid this away in a method:

private static Context getTraceContext(StompHeaders headers) {
    return GlobalOpenTelemetry.getPropagators().getTextMapPropagator()
        .extract(Context.current(), headers, new TextMapGetter<>() {
            @Nullable
            @Override
            public String get(@Nullable StompHeaders carrier, String key) {
                return carrier.getFirst(key);
            }

            @Override
            public Iterable<String> keys(StompHeaders carrier) {
                return headers.toSingleValueMap().keySet();
            }
        });
}

And just like before, we makeCurrent() the parent context, then within that context we create our new CONSUMER span:

var traceContext = getTraceContext(headers);
try (var scope = traceContext.makeCurrent()) {
    TimestampedMessage msg = (TimestampedMessage) payload;
    var tracer = GlobalOpenTelemetry.getTracer("Custom_MessageSubscriber");
    var span = tracer.spanBuilder("/topic/messages receive")
        .setSpanKind(SpanKind.CONSUMER)
        .setAttribute("x-from", msg.getFrom())
        .setAttribute("x-subject", msg.getSubject())
        .startSpan();
    try (var x = span.makeCurrent()) {
        // message handling business logic goes here
    } finally {
        span.end();
    }
}

It's worth noting that the message from and subject fields are appended to the CONSUMER span in the form of custom attributes. That's not strictly required, of course, but it adds a little more observable detail and depth to the final component in our distributed trace.

Improved Traces

Now that we've manually instrumented our code, we run the app for a while and see how the trace has improved. Our messaging trace now looks something like this: Much nicer! We see that the topmost root span is created from our publisher, and it has a child span for the processor/router and another child span for the final subscriber. The span details for the publisher show the exact class and method that originated the message: And finally, our subscriber's consumer span, which shows the custom message attributes:

Conclusion

We've walked through an example of adding manual OpenTelemetry instrumentation to our Spring code in order to trace a websocket-based STOMP protocol message from a publisher, through a processor, and finally to the receiver. We've demonstrated specific usage of OpenTelemetry Java APIs and how to achieve trace context propagation by reading and writing messaging headers via implementation-specific adaptors. We rolled up our sleeves and got our hands dirty with some code, but it wasn't that bad and the results are looking good. This much deeper level of observability is a wildly powerful tool that helps developers and operators better understand the flow of data through their systems. We've hit our length limit for now. See the GitHub repo for a list of improvements and resources. Thanks for reading!
I have an alert configured; the search finds an error in a Windows event log, and the alert is set up to trigger a notification email. Is there a way to have the alert run a PowerShell script when the error is found?
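One possible route (a sketch only; the stanza name and script filename below are placeholders, and note that script alert actions are deprecated in recent Splunk versions in favor of custom alert actions) is the script alert action in savedsearches.conf, pointing at a batch or PowerShell wrapper placed under $SPLUNK_HOME/bin/scripts on the search head:

[My Windows Error Alert]
action.script = 1
action.script.filename = run_remediation.bat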
I am looking to create a simple pie chart that contrasts the total number of users during any given timeframe vs how many logged into a specific app. I am probably overthinking this, but what I did is a search for the distinct_count of users during a period and then joined another search that calculates the distinct_count of users that logged into a specific app over that same period. For example:

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| stats dc(actor.alternateId) as "Total Logins"
| join [ | search index="okta" "target{}.displayName"="Palo Alto Networks - Prisma Access" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
    | stats dc(actor.alternateId) as "Total Palo Logins"]
| table "Total Palo Logins" "Total Logins"

The only issue is I can't get a proper pie graph of the percentage of Palo Logins vs Total Logins. Any help would be appreciated. I am sure I am missing something simple here.
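A pie chart wants one row per slice rather than two columns, so a possible reshaping (a sketch; it will count a user in both slices if they signed on to Prisma Access and to other apps) is to label each event and let stats produce the two rows directly:

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| eval login_type=if('target{}.displayName'="Palo Alto Networks - Prisma Access", "Prisma Access logins", "Other logins")
| stats dc(actor.alternateId) as distinct_users by login_type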
Is there any way to easily get HTTP request logs from Splunk-created apps? There is a failure in communicating with Zscaler. The error message seems to be generated on their side, but they are pushing hard for the body of the message that was sent to their API. Since the app was created by Splunk, I'm disinclined to hack that into their app just to get this intermittent data for Zscaler. Any suggestions from the community?
I have two lookups that have lists of subnets and the names of the subnets. One lookup (subnet1.csv) has fields called name and subnet, and the other (subnet2.csv) has fields named Name and Range. I would like to combine the two. So far I have this:

| inputlookup subnet1.csv
| lookup subnet2.csv Name Range OUTPUT Range AS Subnet
| table Name Subnet

This doesn't seem to work. When I run it, I only get the results from subnet1.csv and I can't seem to figure out why.
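If the goal is one combined list rather than a field-by-field match, a sketch like this (assuming name/subnet in subnet1.csv and Name/Range in subnet2.csv, as described) unions the two files after renaming them to a common pair of fields:

| inputlookup subnet1.csv
| rename name AS Name, subnet AS Subnet
| append [| inputlookup subnet2.csv | rename Range AS Subnet]
| table Name Subnet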
Without the ability to remove testing errors from the uptime calculation when reporting monthly numbers, I spend a lot of time doing it manually (multiple teams). To alleviate this, I plan on writing a Pandas script to automate this process, but I need to export a CSV with a column that includes the success or failure of each run (HTTP Check). I don't see CSV as an export option aside from the comparison reports, and the comparison reports only allow me to use RB tests. Can anyone direct me to a mechanism to export run data (success/failure) for HTTP checks via CSV? Legacy Synthetics (Rigor)
Hi, We have a new implementation of Splunk ITSI, running on Splunk Cloud, in a new search head. Since the day the search head was installed, every search that we run is followed by a warning message related to a missing eventtype. The warning message is similar to the below: "[idx-1.my-company.splunkcloud.com,idx-2.my-company.splunkcloud.com] Eventtype 'wineventlog-ds' does not exist or is disabled." Has anyone ever experienced this behavior on Splunk ITSI? Or does anyone know which source app/add-on contains this eventtype that is being referenced by ITSI? Thanks!
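One way to check whether any app on the search head actually defines that eventtype (a sketch using the saved/eventtypes REST endpoint; if it returns nothing, the eventtype simply isn't defined anywhere visible to the search head) is:

| rest /servicesNS/-/-/saved/eventtypes splunk_server=local
| search title="wineventlog-ds"
| table title eai:acl.app disabled search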
I have a Min Host alert that was deleted but is still triggering and spamming our support systems. How can I stop this from occurring? The alert does not appear in the Active Alerts or Detectors lists. I recreated the alert with the same name, but the old code is still triggering. Is there a way to disable a deleted alert or flush it from the SignalFx system? Thanks, -Sean
I have logs that get generated every 5 min.

time=2023-02-06 00:01:00, app=bema, currentUseCount=7
time=2023-02-06 00:06:00, app=bema, currentUseCount=7
time=2023-02-06 00:11:00, app=bema, currentUseCount=10
time=2023-02-06 00:16:00, app=bema, currentUseCount=8
time=2023-02-06 00:21:00, app=ash, currentUseCount=12
time=2023-02-06 00:26:00, app=ash, currentUseCount=10
time=2023-02-06 00:31:00, app=ash, currentUseCount=8
time=2023-02-06 00:36:00, app=ash, currentUseCount=9

How can I calculate the hours spent on each app based on the above logs?
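A minimal sketch, assuming each logged event represents the 5-minute interval it was written in and that the index/sourcetype names below are placeholders:

index=your_index sourcetype=your_sourcetype
| stats count AS intervals BY app
| eval hours_spent=round(intervals*5/60, 2)

With the sample above, bema and ash each have 4 intervals, i.e. roughly 0.33 hours each.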
I have the following search query that I've been using so far to display the unique values in lists of Ids:

<search>
| eval ids=if(group_id >= 4, id, '')
| eval type_x_ids=if((group_id >= 4 AND is_type_x="true"), id, '')
| eval non_type_x_ids=if((group_id >= 4 AND is_type_x="false"), id, '')
| stats count as total_count, values(type_x_ids) as list_of_x_ids, values(non_type_x_ids) as list_of_non_x_ids, values(ids) as list_of_all_ids by some_characteristic

Now that I've seen which Ids are in the lists, I would like to change the query to count the number of unique ids in the lists, split up by some characteristic. mvcount doesn't seem to work in the stats command the way I tried it:

Attempt 1:
| stats count as total_count, mvcount(type_x_ids) as num_of_x_ids, mvcount(non_type_x_ids) as num_of_non_x_ids, mvcount(ids) as num_of_all_ids by some_characteristic

Attempt 2:
| stats count as total_count, mvcount(values(type_x_ids)) as num_of_x_ids, mvcount(values(non_type_x_ids)) as num_of_non_x_ids, mvcount(values(ids)) as num_of_all_ids by some_characteristic

How should I write the stats line so I get a table that shows the number of unique Ids in each list of Ids, split by some characteristic? I would like the following fields in my resulting table:

| some_characteristic | total_count | num_of_x_ids | num_of_non_x_ids | num_of_all_ids |

I would appreciate any help you can give!!
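A possible rewrite (a sketch; it also swaps the empty-string placeholder for null() so blanks aren't counted as a distinct value) is to use the dc() aggregation instead of mvcount, since dc counts distinct values directly inside stats:

<search>
| eval ids=if(group_id >= 4, id, null())
| eval type_x_ids=if((group_id >= 4 AND is_type_x="true"), id, null())
| eval non_type_x_ids=if((group_id >= 4 AND is_type_x="false"), id, null())
| stats count as total_count, dc(type_x_ids) as num_of_x_ids, dc(non_type_x_ids) as num_of_non_x_ids, dc(ids) as num_of_all_ids by some_characteristic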
Hi, I am working on a playbook which will check for any new artifact that has been added during the playbook execution. It must repeatedly check for any new artifacts. I am looking to add custom code that will be triggered by the addition of any new artifact. Regards, Sujoy
index=akamai "httpMessage.host"="*" "httpMessage.path"="/auth/realms/user/login-actions/authenticate" "*User-Agent:*"
| spath "attackData.clientIP"
| rename "attackData.clientIP" as ipAddress, httpMessage.host as Host, httpMessage.path as Path, User-Agent as "User Agent"
| where [search index=keycloak type=LOGIN*
    [ inputlookup fraud_accounts.csv
    | rename "Account Login" as customerReferenceAccountId, "Input IP" as ipAddress
    | return 1000 customerReferenceAccountId ]
  | return 10000 ipAddress ]
| table ipAddress, Host, Path, _time, "Account ID", "User Agent"
Hello Splunkees, I have a requirement where I need to calculate the availability or uptime percentage of some critical APIs. We ingest those API logs in Splunk, and they tell us about the throughput, latency and HTTP status codes. Is there a way to calculate the availability of any API using these metrics? I mean something like calculating the success and failure rate and then, based on that, coming up with a number that says how available my API is. Does anyone have a basic query which can calculate that? I have created something like the below to calculate the success and failure rates -

index=myapp_prod sourcetype="service_log" MyCriticalAPI Status=200
| timechart span=15m count as SuccessRequest
| appendcols [ search index=myapp_prod sourcetype="service_log" MyCriticalAPI NOT Status=200
    | timechart span=15m count as FailedRequest]
| eval Total = SuccessRequest + FailedRequest
| eval successRate = round(((SuccessRequest/Total) * 100),2)
| eval failureRate = round(((FailedRequest/Total) * 100),2)
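One possible simplification (a sketch; it treats any non-200 status as a failure, which may or may not match your definition of availability) avoids the appendcols by counting both outcomes in a single timechart:

index=myapp_prod sourcetype="service_log" MyCriticalAPI
| timechart span=15m count(eval(Status=200)) as SuccessRequest, count(eval(Status!=200)) as FailedRequest
| eval Total=SuccessRequest+FailedRequest
| eval successRate=round((SuccessRequest/Total)*100, 2)
| eval failureRate=round((FailedRequest/Total)*100, 2)

Appending | stats avg(successRate) as availability_pct would collapse the per-interval rates into a single availability number for the whole time range.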
Hello, would you copy the app's full folder to another location as a backup, extract the new app from the tgz into master-apps or shcluster/apps, and then copy your local folder from the backup into the new one? Thanks.
Hello, I find it difficult to stop the search when I get the first result in multisearch. I tried | head 1, but it can't be used inside multisearch. Is there any way to stop it, to improve my search efficiency? I have over 10 indexes, each with over 10 million entries to search.

| multisearch
    [index = A | search ....]
    [index = B | search ....]
    [index = C | search ....]
    [index = D | search ....]
    ....

Thank you so much.
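If the subsearches are only filtering (no transforming commands), a possible alternative (a sketch; it assumes the per-index filters can be expressed in one combined base search) is to OR the indexes together and put head after the base search, since head lets the search finish early once the requested number of events has been returned:

(index=A ....) OR (index=B ....) OR (index=C ....) OR (index=D ....)
| head 1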