All Topics

I am using Splunk Cloud and I see that the license is being exceeded daily. In the Cloud Monitoring Console app there is no option that lets me break usage down by sourcetype, and this would help me know exactly which source has increased usage.

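A minimal sketch that may help, assuming your role is allowed to search the _internal index on your Splunk Cloud search head; license_usage.log records ingested bytes (field b) per sourcetype (field st):

index=_internal source=*license_usage.log type="Usage"
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) by st

Run it over the last week or so to see which sourcetype grew on the days the license was exceeded; swap st for s (source), h (host) or idx (index) to slice the usage differently.
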
I'm looking to create a search for users that have reset their password and then, within a certain amount of time, logged off. Does anybody know the best way of producing a search for this? Much appreciated for any help with this.

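A rough sketch of one approach, assuming Windows Security logs where EventCode 4724 is a password reset and EventCode 4634 is a logoff, and that both events carry the account name in a field such as user; the index, sourcetype, and field names below are assumptions to adapt to your data:

index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4724 OR EventCode=4634)
| eval action=if(EventCode=4724, "password_reset", "logoff")
| transaction user startswith=eval(action="password_reset") endswith=eval(action="logoff") maxspan=15m
| table _time user duration

maxspan bounds how closely the logoff must follow the reset; adjust it to whatever "a certain amount of time" means in your case.
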
Hello, We are still facing the following issue when we put our Indexer Cluster in maintenance mode and stop one Indexer: all the Indexers stop ingesting data, their queues grow, and they wait for splunk-optimize to finish its job. This usually happens when we stop the Indexer after a long time since the last stop. Below is an example of the error message that appears on all the Indexers at once, on different bucket directories:

throttled: The index processor has paused data flow. Too many tsidx files in idx=myindex bucket="/xxxxxxx/xxxx/xxxxxxxxxx/splunk/db/myindex/db/hot_v1_648" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

Checking further, going into the bucket directory, I was able to see hundreds of .tsidx files. What splunk-optimize does is merge those .tsidx files.

We are running Splunk Enterprise 9.0.2 and:
- on each Indexer the disks reach 150K IOPS
- we already applied the following settings, which reduced the effect but have not solved it:

indexes.conf
[default]
maxRunningProcessGroups = 12
processTrackerServiceInterval = 0

Note: we kept maxConcurrentOptimizes=6 as the default, because we have to keep maxConcurrentOptimizes <= maxRunningProcessGroups (this has also been confirmed by Splunk support, who informed me that maxConcurrentOptimizes is no longer used (or has less effect) since 7.x and is kept mainly for compatibility).
- I know that since 9.0.x it is possible to manually run splunk-optimize on the affected buckets, but that seems more like a workaround than a solution; considering a deployment can have multiple Indexers, it is not straightforward.

What do you suggest to solve this issue?

Thanks a lot, Edoardo

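Not a fix, but a hedged sketch that may help quantify the problem: assuming the throttle messages land in the _internal index with the wording shown above, this shows which indexers and buckets hit the condition and how often:

index=_internal sourcetype=splunkd "Too many tsidx files"
| rex "idx=(?<idx>\S+)\s+bucket=\"(?<bucket>[^\"]+)\""
| stats count latest(_time) AS last_seen by host idx
| convert ctime(last_seen)
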
Hi, I want to create a search from the below event, to raise an alert if a particular system has the label lostinterface or the label is not there at all. In profiles we have 2 values, i.e. tndsubnet1 and tndsubnet2; how can we make the search separate out the systems into tndsubnet1 and tndsubnet2 accordingly? Thanks.

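Without the sample event it is hard to be specific, but a rough sketch under the assumption that each event carries fields named system, label, and profiles might look like this (every index, sourcetype, and field name here is a placeholder):

index=your_index sourcetype=your_sourcetype
| where isnull(label) OR label="lostinterface"
| stats values(system) AS affected_systems count by profiles

Grouping by profiles keeps the tndsubnet1 and tndsubnet2 systems in separate rows.
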
I have a query where I'm looking for users who are performing large file transfers (>50MB). This query runs every day and as a result we have hosts that are legit. These host names are extracted from the dst_host field of the results of my search. As we compile a list of valid hosts, we can simply add it to the query to be excluded from the search, like:

index=* sourcetype=websense* AND (http_method="POST" OR http_method="PUT" OR http_method="CONNECT") AND bytes_out>50000000 NOT (dst_host IN (google.com, webex.com, *.zoom.us) OR dst_ip=1.2.3.4)

I know there's a better way to keep the excluded hosts or IPs in a file that I can query against, but I'm not sure how to do that. I don't want to update the query every day with hosts that should be excluded, but rather maintain a living document that can be updated with hosts or IPs to exclude. Can someone point me in the right direction for this issue?

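One common pattern is a CSV lookup holding the allow-listed destinations and a subsearch that turns it into exclusions. A sketch, assuming lookup files named excluded_hosts.csv (column dst_host) and excluded_ips.csv (column dst_ip) have been uploaded and shared with the app:

index=* sourcetype=websense* (http_method="POST" OR http_method="PUT" OR http_method="CONNECT") bytes_out>50000000
    NOT [| inputlookup excluded_hosts.csv | fields dst_host]
    NOT [| inputlookup excluded_ips.csv | fields dst_ip]

The subsearch results are expanded into dst_host=... / dst_ip=... terms, so wildcard entries such as *.zoom.us in the CSV still behave as wildcards, and adding a new exclusion is just a new row in the lookup.
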
Hi, I want to rename the value Required Parameters Longitude and Latitude are missing or invalid to a new value, Required Parameters missing.

index="****" k8s.namespace.name="*****" "Error" OR "Exception"
| rex field=_raw "(?<error_msg>Required Parameters Longitude and Latitude are missing or invalid)"
| stats count by error_msg
| sort count desc

Any help will be great.

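A minimal sketch: add an eval to rewrite the extracted value before the stats, keeping the rest of the search as-is:

index="****" k8s.namespace.name="*****" "Error" OR "Exception"
| rex field=_raw "(?<error_msg>Required Parameters Longitude and Latitude are missing or invalid)"
| eval error_msg=replace(error_msg, "Required Parameters Longitude and Latitude are missing or invalid", "Required Parameters missing")
| stats count by error_msg
| sort count desc
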
Hello everyone, I have the following field and example value: sourcePort=514.000. I'd like to format this field so that only the digits before the decimal point are kept. Furthermore, this should only apply to a certain group of events (group one).

Basically:
before: sourcePort=514.000
after: sourcePort=514

What I have until now:
search... | eval sourcePort=if(group=one, regex part, sourcePort)

The regex to match only the digits is ^\d{1,5}. However, I am unsure how to work with the regex, and whether it is even possible to achieve my goal this way. Thanks in advance.

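It is possible with the replace() eval function; a sketch, assuming the group field holds the string value one:

search ...
| eval sourcePort=if(group="one", replace(sourcePort, "^(\d{1,5})\..*", "\1"), sourcePort)

replace() returns only the captured leading digits when the pattern matches; events outside group one keep the original value.
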
Hi there, I am trying to ingest data which is stored within a user's profile, under the AppData location C:\Users\(User ID)\AppData\Local\UiPath\Logs, but I can't pull in any events. I've tried lots of different stanzas, like:

[monitor://C:\Users\DKX*\AppData\
[monitor://C:\Users\DKX$\AppData\
[monitor://C:\Users\...\AppData\
[monitor://C:\Users\%userprofile%\AppData\

Any idea why it isn't working? I know I've not included all my stanza attempts, but could it be due to the Splunk service account not having access to that location?

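For reference, a hedged inputs.conf sketch: monitor paths accept * for a single path segment and ... for any number of segments, while environment variables such as %userprofile% are not expanded. The index and sourcetype names below are placeholders:

[monitor://C:\Users\*\AppData\Local\UiPath\Logs]
disabled = 0
index = uipath
sourcetype = uipath:logs

If the stanza looks right, the service-account theory is worth checking: the account running splunkd needs read access to the user profiles, and the forwarder needs a restart after the change.
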
Hello everyone, I am passing the dates as tokens but it shows an error in both of these conditions:

Cond1: | where (Date>="$date_start$" AND Date<="$date_end$")
Cond2: | where (Date>="2022-06-01" AND Date<="2022-06-02")

Please help.

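A hedged sketch that often sidesteps this kind of error: convert both sides to epoch time with strptime() before comparing, assuming Date is a string in YYYY-MM-DD format and the tokens resolve to the same format:

| eval date_epoch=strptime(Date, "%Y-%m-%d")
| where date_epoch>=strptime("$date_start$", "%Y-%m-%d") AND date_epoch<=strptime("$date_end$", "%Y-%m-%d")
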
Hi, I've been told that using field extractions on JSON is not best practice and that I should use calculated fields instead. In some cases that's easy and I can use replace or other methods, but in others it is more difficult. I have some events giving me information about software versions. When I try to extract the version string as follows, I get the result for events containing this string; in all other cases I get the complete string instead. What I need is the matching string or nothing, and I couldn't figure out how to do that.

replace(message, "^My Software Version (\S+).*", "\1")

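One way to get "the matching string or nothing" is to guard the replace() with match(); a sketch of the calculated-field expression (the field name software_version is just an example):

EVAL-software_version = if(match(message, "^My Software Version \S+"), replace(message, "^My Software Version (\S+).*", "\1"), null())

match() only tests the pattern, so events without the version string end up with a null field instead of the full message.
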
I am trying to use a macro to bring external indexes into the child dataset VPN, but a search with tstats on this dataset doesn't work. Example of the search:

| tstats values(sourcetype) as sourcetype from datamodel=authentication.authentication where nodename=authentication.VPN by nodename

But when I explicitly enumerate the indexes, everything works! It also works with the macro when I use a search of the form:

| from datamodel ...

What is the problem?

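If enumerating the indexes is what works, one hedged workaround is to keep the dataset constraint free of macros and pass the indexes in the tstats where clause instead (the index names below are placeholders):

| tstats values(sourcetype) as sourcetype from datamodel=authentication.authentication where nodename=authentication.VPN AND (index=vpn_index1 OR index=vpn_index2) by nodename
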
Hi, I am trying to create a metric from ADQL searches. However, when I create the metric, it keeps fluctuating between one and zero, whereas when I run the same thing as an ADQL search I get an actual value (5 in this case, as you can see in the ADQL query screenshot).

Analytics ADQL query screenshot:
Analytics metric screenshot:

Am I going wrong somewhere? Thanking you in advance.

I have edited the inputs.conf file as below.

[Bamboo://localhost:8085]
sourcetype = bamboo
interval = 60
server = http://localhost:8085
protocol = https
port = 8085
username = bamboo_user
password = bamboo_pwd
disabled = 0

Is the above file correct? Based on this, the HTTP Event Collector will generate the token in Splunk Web, right?

Hello, my need is to use Splunk Enterprise to serve multiple client organizations using a single instance, i.e. multitenancy. I have some installed Splunk apps using only one index, and they manage the data coming from multiple clients; how can I separate them on the dashboard? How can I create role-based permissions per customer? Does Splunk Enterprise natively support multitenancy? How can I achieve my goal? Best regards, Yassine.

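Splunk Enterprise has no dedicated multitenancy feature; the usual approach is one index per customer plus a role per customer that can only search that index. A hedged authorize.conf sketch (the role and index names are placeholders, and it assumes the data can be re-pointed to per-customer indexes):

[role_customer_a]
importRoles = user
srchIndexesAllowed = customer_a
srchIndexesDefault = customer_a

[role_customer_b]
importRoles = user
srchIndexesAllowed = customer_b
srchIndexesDefault = customer_b

Apps that currently write everything into a single index would need their inputs split into per-customer indexes for this separation to hold in dashboards as well.
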
Hello. I have three lists of names for different technologies. I would like to put the technologies in a menu or multiselect so that when I select each technology it brings me the names belonging to each selected one, for example:

My list is:
My multiselect input is:

When selecting each option, I would like it to show me all the users, like the following table:

I tried doing the following ($ms_Be1Voild$ is the token of my multiselect input):

| makeresults
| eval input = "$ms_Be1Voild$"
| eval array = mvjoin(input, ",")
| fields array

But the result is just the following: Active Directory,o365,Windows

Could anyone help me please?

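A hedged sketch of one way to do this, assuming the names are kept in a lookup file (here called technology_users.csv, a placeholder) with columns technology and user, and that the multiselect's valuePrefix and valueSuffix are set to a double quote with ", " as the delimiter so the token expands to "Active Directory", "o365", "Windows":

| inputlookup technology_users.csv
| search technology IN ($ms_Be1Voild$)
| table technology user
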
Hi, I'm currently working on a props.conf and get different values for _time and the timestamp in my logs. What did I do wrong? Thanks in advance.

2023-01-24T13:00:23+00:00 avx.local0.notice {"host":"xx-xx-xxxxx-xxxx-xxxxx-x-xx-000x-xxxxx-xxxx-xx.xxx.xxx.xxx","ident":"syslog","message":"xx:xx.xxxxxx+xx:xx xx-xx-xxxxxx-xxxx-xxxxxxx-x-xx-xxxx-xxxxx-hagw-xx.xxxx.xxx.xxxx

From the Splunk search the values are the following:
timestamp: 2023-01-24T13:00:19.141113233
_time: 2023-01-24T14:00:23.000+01:00

My props.conf is the following:
[s3:Test]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 26
TRUNCATE = 10000
SHOULD_LINEMERGE = false

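One hedged observation: 2023-01-24T14:00:23.000+01:00 is the same instant as the header's 2023-01-24T13:00:23+00:00, just rendered in a +01:00 user timezone, so _time may actually be correct; the timestamp field (13:00:19) appears to come from inside the JSON payload rather than from the header. For reference, the same stanza with descriptive comments:

[s3:Test]
# the timestamp sits at the very start of the raw event
TIME_PREFIX = ^
# matches 2023-01-24T13:00:23+00:00
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
# the 25-character header timestamp fits within this lookahead
MAX_TIMESTAMP_LOOKAHEAD = 26
TRUNCATE = 10000
SHOULD_LINEMERGE = false
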
The internal logs flow to the Splunk UI, but the application logs are not flowing to the Splunk UI. We have a cluster with several different components. We are facing the above issue with only one of the components, although the Splunk configuration for all the components is the same except that the host differs.

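If the application logs come from file monitor inputs, a hedged troubleshooting search against the forwarder's own logs can show whether the files are being picked up at all (replace the host placeholder with the problem component's host):

index=_internal host=<problem_host> sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) (log_level=WARN OR log_level=ERROR)
| stats count by component log_level
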
Hello Community! I'm searching for a solution to highlight "HostC", which has an AppC failure and no further log entry saying that AppC was started again. How can I do this, regardless of which host it happens on? I saw a comment suggesting to create events for "app failure" and "app started" and then build a transaction, but is there another way too?

SampleData:
1.1.1970 08:00 HostA -
1.1.1970 08:00 HostB AppB=failure
1.1.1970 08:00 HostC AppC=failure
1.1.1970 09:00 HostA AppA=started
1.1.1970 09:00 HostB AppB=started
1.1.1970 09:00 HostC -

Thanks for your help
Rob

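A hedged alternative to transaction: keep only the latest state per host and app, and report those still in failure. The index name and the rex pattern below are assumptions based on the sample format:

index=app_logs ("failure" OR "started")
| rex field=_raw "(?<app>App\w+)=(?<state>failure|started)"
| stats latest(state) AS last_state latest(_time) AS last_seen by host app
| where last_state="failure"

With the sample data this returns HostC / AppC, because its latest App entry is still failure.
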
We're trying to integrate Splunk APM into a Python-based app, but the service doesn't appear in the APM list. The integration works locally, but not for the same service deployed in our Kubernetes cluster. We have added the following env variables to the deployment manifest of the application:

- name: SPLUNK_OTEL_AGENT
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  value: "http://$(SPLUNK_OTEL_AGENT):4317"
- name: OTEL_SERVICE_NAME
  value: "my-app"
- name: OTEL_RESOURCE_ATTRIBUTES
  value: "deployment.environment=development"

We also tried to add these additional variables to send the data directly, but it still didn't work:

- name: SPLUNK_ACCESS_TOKEN
  value: "******"
- name: OTEL_TRACES_EXPORTER
  value: "jaeger-thrift-splunk"
- name: OTEL_EXPORTER_JAEGER_ENDPOINT
  value: "https://ingest.us1.signalfx.com/v2/trace"

A question: when we talk about correlation, is it necessarily because a query is being made across 2 or more sources? Or is it also considered correlation when certain criteria are searched in a single source to try to find a possible event or security incident? For you, what is correlation in Splunk?