All Topics

I am using the search below:

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen

I would like to add a field populated by the name that ends in "srx" ("5995-somename-srx" in this sample event):

Jan 4 13:07:57 1.1.1.1 1 2024-01-04T13:07:57.085-05:00 5995-somename-srx rpd 2188 JTASK_SIGNAL_INFO [junos@2636.1.1.1.2.133 message-name="INFO Signal Info: Signal Number = " signal-number="1" name=" Consumed Count = " data-1="3"]
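One possible approach (a minimal sketch; the srx_name field and the assumption that the wanted value is the syslog host token ending in "-srx" are mine, not from the original search):

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| rex field=host "(?<srx_name>\S+-srx)$"
| fields + host lastSeen srx_name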
Hello, I have to work on a parser for a time format like this: "time: 2024-02-15T11:40:19.843185438Z". It is JSON data, so I created the logic below to extract the time.

TIME_PREFIX = \"time\"\:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z

Although I see no errors while uploading the test data, the time field only shows values down to millisecond precision, e.g. 2/15/24 11:40:19.843 AM. Is this the right way, or does Splunk show nanosecond values too? If it does, what is missing in my logic? Kindly help.

Regards.
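For reference, a minimal props.conf sketch under the same assumptions (the sourcetype name my_json is hypothetical; MAX_TIMESTAMP_LOOKAHEAD just gives the parser enough characters to read the full fractional part):

[my_json]
TIME_PREFIX = \"time\"\:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z
MAX_TIMESTAMP_LOOKAHEAD = 40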
Can I ingest CPU, memory, and EventID data into a metric index by using the Splunk App for Windows? I get data when I ingest it into an event index, but when I change the index to a metric index, the data stops coming into any index. #splunkforwarder #splunkappforwindows
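For context, perfmon data normally reaches a metrics index through a metrics-aware sourcetype; a plain event payload sent to a metrics index is rejected, which may explain the symptom. A hedged inputs.conf sketch (the index name windows_metrics is a placeholder; check the Splunk Add-on for Windows documentation for the exact PerfmonMetrics sourcetypes in your version):

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = *
interval = 60
sourcetype = PerfmonMetrics:CPU
index = windows_metrics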
Looking for some advice, please! I have pushed the Splunk UF via MS Intune to all domain laptops. All looks well, with the config file and the settings for the reporting server and ports set. On an example machine, if I go to Services, SplunkForwarder is running. These logs are meant to be pushed to our CyberDefence third party. However, it seems Splunk has no rights to send logs (possibly due to the 'Log On' as settings in the SplunkForwarder service). Has anyone encountered and resolved this before, or completed a Splunk UF install via Intune?
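For comparison, a sketch of a silent MSI command line that sets the service account explicitly (the hostname and account are placeholders, and the password is elided; verify the flag set against the UF installer documentation for your version):

msiexec.exe /i splunkforwarder-x64.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" LOGON_USERNAME="DOMAIN\svc_splunk" LOGON_PASSWORD="..." /quiet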
Hey Experts, I'm new to Splunk and I'm trying to create a new lookup from data in an index (index=abc). Can someone please guide me on how to achieve this? Any help or example queries would be greatly appreciated. Thank you!
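A minimal sketch (the field name host and the lookup file name are illustrative; outputlookup writes the search results into a CSV lookup):

index=abc
| stats latest(_time) as last_seen by host
| outputlookup my_lookup.csv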
Hello @ITWhisperer, I am trying to get the volume of data ingestion, broken down by index group. I tried this SPL but am unable to get the results in the table:

index=summary source="splunk-ingestion"
| dedup keepempty=t _time idx
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| bin _time span=1h
| eval ingestion_gb=round(ingestion_gb,3)
| eval group_field=if(searchmatch("idx=.*micro.*group1"), "group1", searchmatch("idx=.*soft.*"), "group2", true(), "other")
| timechart limit=0 span=1d sum(ingestion_gb) as GB by group_field

We have a list of indexes like: AZ_micro, micro, AD_micro, Az_soft, soft, AZ_soft. The indexes containing 'micro' are grouped under the name 'microgroup', while the indexes containing 'soft' are grouped under 'softgroup', and so on. So, in the table I want to show the volume of the groups, like:

------------------------------------------
group name   |   volume
------------------------------------------
microgroup   |   <0000>
softgroup    |   <0000>
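A corrected grouping sketch (assuming idx holds the index name; if() takes a single condition, so case() with match() replaces the searchmatch calls, which do not accept regex strings like that):

index=summary source="splunk-ingestion"
| dedup keepempty=t _time idx
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| eval group_field=case(match(idx, "micro"), "microgroup", match(idx, "soft"), "softgroup", true(), "other")
| stats sum(ingestion_gb) as volume by group_field
| eval volume=round(volume, 3)
| rename group_field as "group name"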
index=my_index source="/var/log/nginx/access.log"
| stats avg(request_time) as Average_Request_Time
| where Average_Request_Time > 1

I have this query set up as an alert for when my web app request duration goes over 1 second, and it searches back over a 30-minute window. I want to know when this alert has recovered. So, effectively, I want to run this query twice, against the 1st 30 minutes of an hour and then the 2nd 30 minutes, and get a result I can alert on when it is returned. The result would indicate that the 1st 30 minutes was over 1 second average duration and the 2nd 30 minutes was under 1 second average duration, and thus it recovered. I have no idea where to start with this! But I do want to keep the alert query above for my main alert of an issue and have a 2nd alert query for this recovery element. Hope this is possible.
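One possible recovery-alert sketch over a rolling 60-minute window (same source and field assumptions as the main alert; it fires only when the first half averages over 1 second and the second half at or under it):

index=my_index source="/var/log/nginx/access.log" earliest=-60m latest=now
| eval window=if(_time < relative_time(now(), "-30m"), "first", "second")
| stats avg(eval(if(window="first", request_time, null()))) as first_avg, avg(eval(if(window="second", request_time, null()))) as second_avg
| where first_avg > 1 AND second_avg <= 1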
Dears, I'm trying to install Events Service 23.7 with Elasticsearch 8. The scenario:

1- Adding hosts as the root user. This only works as root; I tried it with another user over passwordless SSH, but it didn't work. (If you advise adding hosts as a different user, okay, I will share the error I get for that.)

2- After adding the hosts as root, I go to the Events Service:

a- The default installation directory is /opt/appdynamics/platform/product/events-service/data, and when I change it to /home/appd-user/appdynamics-event-service-path/ I get the error "Unable to check the health of Events Service hosts [events-03, events-02, events-01] through port 9080", or it runs and then stops within seconds.

b- In the logs, Elasticsearch stopped, because it was running as the root user.

c- My question here is also: why does the Events Service installation on each server go under /opt/appdynamics?

To work around this I added the hosts with the appd user, but first ran this on each Events Service server:

sudo chown -R appd:appd /opt/
sudo chmod -R u+w /opt/

Adding the hosts went well and the Events Service installation went well, but some of the Events Service instances stopped with an error that Elasticsearch was running as the root user, even though it runs as the appd user. To resolve that, I ran this manually on each server:

bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

Anyway, I see all of this as a workaround; I want something more stable, and I'm sure someone will guide me to the right steps.

BR
Abdulrahman Kazamel
I am running this query:

index=* source="/somesource/*" message "403"
| search level IN (ERROR)

And the response is:

{ "instant": { "epochSecond": 1707978481, "nanoOfSecond": 72000000 }, "thread": "main", "level": "ERROR", "message": "Error while creating user group", "thrown": { "commonElementCount": 0, "extendedStackTrace": "403 Forbidden:" }, "endOfBatch": false, "threadId": 1, "threadPriority": 5, "timestamp": "2024-02-15T06:28:01.072+0000" }

Now, when I run the following query:

index=* source="/somesource/*" message "403"
| search level IN (ERROR)
| eval Test=substr(message,1,5)
| eval Test1=substr(thrown.extendedStackTrace, 1, 3)
| table Test, Test1

I get a value for Test; the correct substring occurs (the output is Error). But Test1 is an empty string, whereas I am expecting 403. Since message is at the root, it works, but extendedStackTrace is under thrown, and thrown.extendedStackTrace is not rendering the correct result. Although, if I do

...| table Test, Test1, thrown.extendedStackTrace

there is a proper value coming in for thrown.extendedStackTrace. What am I missing?
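A likely fix to sketch: in eval, a field name containing dots generally needs single quotes, otherwise thrown.extendedStackTrace is parsed as an expression rather than as a field name (table, by contrast, takes field names directly, which is why it works there):

| eval Test1=substr('thrown.extendedStackTrace', 1, 3)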
Hello Team, I need help with the points below:

1] How to add an entry from a completed search, with the fields Host, SourceIP and DestinationIP, into a lookup table.

2] How to add an entry into a lookup table from a triggered notable, or from the contributing events of the notable.

The requirement here is to create a correlation rule from the lookup values, which will be taken from previously triggered notables.
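For point 1], a minimal sketch (my_index and the lookup file name are placeholders; append=true preserves existing lookup rows):

index=my_index
| table Host, SourceIP, DestinationIP
| outputlookup append=true notable_entries.csv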
The Search Head appears to have a rogue Python process (appserver.py) that slowly eats away all memory on the system, then eventually causes an OOM, which requires a manual restart of splunkd; then the issue slowly creeps up to happen again.
Hello All, I have the SPL below to compare hourly event data and indexed data, to find whether they follow a similar pattern and whether there is a big gap.

| tstats count where index=xxx sourcetype=yyy BY _indextime _time span=1h
| bin span=1h _indextime as itime
| bin span=1h _time as etime
| eventstats sum(count) AS indexCount BY itime
| eventstats sum(count) AS eventCount BY etime
| timechart span=1h max(eventCount) AS event_count max(indexCount) AS index_count

However, when I compare the hourly results, I get more data points in the indexed data than in the event data. Can you please guide me to resolve the problem? Thank you, Taruchit
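An alternative sketch that bins the two timelines independently and overlays them (same index/sourcetype placeholders as above; this is an assumption-level sketch, not a verified drop-in fix):

| tstats count AS event_count where index=xxx sourcetype=yyy BY _time span=1h
| append
    [| tstats count AS index_count where index=xxx sourcetype=yyy BY _indextime
    | eval _time=_indextime
    | bin _time span=1h
    | stats sum(index_count) AS index_count BY _time]
| timechart span=1h sum(event_count) AS event_count sum(index_count) AS index_count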
My forwarder refuses to connect to the manager over 8089. The firewall is allowing traffic, and set deploy-poll is working, yet I cannot see the connection even being attempted (via netstat) on the Splunk Universal Forwarder (nix). UF ---> HF

Here is my deploymentclient.conf:

[deployment-client]

[target-broker:deploymentServer]  #this was part of default after command was run
deploymentServer=x.x.x.x:8089
targetUri = 10.1.10.69:8089  #this was part of default after command was run
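For reference, the minimal documented shape of deploymentclient.conf only needs targetUri under the target-broker stanza (deploymentServer= is not a recognized key in this file); a cleaned-up sketch keeping the question's address:

[deployment-client]

[target-broker:deploymentServer]
targetUri = 10.1.10.69:8089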
I need to list which data sources have data models. I tried a few ways, but none of them were effective; can you help me, please? Best regards, Valderlúcio.
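One possible starting point (a sketch; /services/data/models is the data model management endpoint, and the exact response fields should be verified on your version):

| rest /services/data/models
| table title, eai:acl.app, acceleration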
I'm new to rex and trying to extract strings from _raw (which is actually malformed JSON, so spath is not a good option either).

I was able to create a rex to identify the pattern that I want (or kind of). However, I'm having trouble establishing the correct boundaries; this is where my lack of experience with rex shows. I cannot establish the end of my pattern correctly. I have pasted the expression that I'm using and a cleaned-up sample of the text I'm dealing with.

| rex field=_raw "next\_best\_thing.+description(?<NBT>.+)topic"

I thought this would identify the beginning of my pattern as next_best_thing (as it does) and the end after the first description, and capture the group (NBT) as \\\":\\\"Another quick brown fox jumps over the lazy dog.\\\"},{\\\" (just before the first topic). Naturally, a lot of clean-up would still be necessary, but I would have something to work with.

However, it seems that the search starts from the end of the _raw string, so the description that is being captured is in a different part, and the group becomes something completely different from what I intended (\\\":\\\"A third quick brown fox jumps over the lazy dog\xAE Bla Bla BlaBla?\xA0 And a forth The quick brown fox jumps over the lazy dog.\\\"},{\\\").

Also, if the expression is just | rex field=_raw "next\_best\_thing.+description(?<NBT>.+)", omitting the end boundary (topic), the whole pattern changes, with a completely different description being used as the end boundary, and naturally the group changes completely. The latter reinforces the impression that the searches are being performed from the end of _raw.

Is there a way to change the search direction? Or am I even more wrong / lost than I think on how to establish the boundaries for the pattern and group?

"BlaBla_BlaBla_condition\\\":\\\"\\\",\\\"OtherBla\\\":{\\\"description\\\":\\\"The quick brown fox jumps over the lazy dog\\\",\\\"next_best_thing\\\":[{\\\"topic\\\":\\\"Target Public\\\",\\\"description\\\":\\\"Another quick brown fox jumps over the lazy dog.\\\"},{\\\"topic\\\":\\\"Benefit to Someone\\\",\\\"description\\\":\\\"A third quick brown fox jumps over the lazy dog\xAE Bla Bla BlaBla?\xA0 And a forth The quick brown fox jumps over the lazy dog.\\\"},{\\\"topic\\\":\\\"Call to Something\\\",\\\"description\\\":\\\"The fith quick brown fox jumps over the lazy dog.\\\"}]}},\\\"componentTemplate\\\":{\\\"id\\\":\\\"tcm:999-111111-99\\\",\\\"title\\\":\\\"BlaBlaBla_Bla_Bla\\\"},\\\"ia_rendered\\\":\\\"data-slot-id=\\\\\\\"BlaBlaBla\\\\\\\" lang=\\\\\\\"en\\\\\\\" data-offer-id=\\\\\\\"BLABLABLABLABLABLA\\\\\\\" \\\"}\",\"Rank\":\"1\"},\"categoryName\":\"\",\"source\":\"BLA\",\"name\":\"OTHETHINGSHERE_\",\"type\":null,\"placementName\":\"tvprimary\",\"presentationOrderWitinSlot\":1,\"productDetails\":{\"computerApplicationCode\":null,\"productCode\":\"BLA\",\"productSubCode\":\"\"},\"locationProductCode\":null,\"locationProductSubCode\":null,\"priorityWithInProductAndSubCode\":null}],\"error\":null},\"custSessionAvailable\":false},\"ecprFailed\":false,\"svtException\":null}"
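One sketch of a fix, assuming the real cause is greediness rather than search direction: .+ matches as much as possible before backtracking, so the capture runs to the last description/topic pair; the lazy form .+? stops at the first one:

| rex field=_raw "next_best_thing.+?description(?<NBT>.+?)topic"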
Hi @ITWhisperer,

index=foo sourcetype=json_foo source="az-foo"
| rename tags.envi as env
| search env="*A00001*" OR env="*A00002*" OR env="*A00005*" OR env="*A00020*"
| table env

From the fields I am using:

env="*A00001*" as "PBC"
env="*A00002*" as "PBC"
env="*A00005*" as "KCG"
env="*A00020*" as "TTK"

From this SPL, I am trying to create a table like:

-------------------------------------------------------
PBC          |   KCG          |   TTK
-------------------------------------------------------
all values   |   all values   |   all values
count        |   count        |   count
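One sketch of the pivot (the env-to-group mapping follows the list above, and count is assumed as the aggregate; the count(eval(...)) idiom counts only the rows where the condition is true):

index=foo sourcetype=json_foo source="az-foo"
| rename tags.envi as env
| eval group=case(match(env, "A00001|A00002"), "PBC", match(env, "A00005"), "KCG", match(env, "A00020"), "TTK")
| stats count(eval(group="PBC")) as PBC, count(eval(group="KCG")) as KCG, count(eval(group="TTK")) as TTK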
Hello all, I have a problem with my SMTP configuration. When I send an e-mail I get this error:

2024-02-14 16:44:15,213 +0100 ERROR cli_common:482 - Failed to decrypt value: ***************************=, error: Read custom key data size=30

Does anyone have an idea?
Hi, I had an add-on built using Add-on Builder last year, and it was working. In January I rebuilt it using the latest version of Add-on Builder, and it started failing with:

CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate

I did not make any change to our add-on other than adding some extra logs. Does anyone know what changed in the latest Add-on Builder version (4.1.4) such that it started failing? I would appreciate any help in troubleshooting this issue.
Hello, our application is not working anymore after upgrading from 9.0.7 to 9.1.2. We have a dashboard made in HTML, and we were including it in a Simple XML dashboard. It's not working because in 9.1.2 jQuery libraries older than 3.5 are no longer supported. Is there a workaround for this, other than rewriting the application in Dashboard Studio? It's a complex application, and we have multiple dashboards like this one.

<view template="app:/templates/TUBE-MAP.html">
  <label>App name</label>
</view>
Hi Team, we currently have the Splunk DB app installed on a Heavy Forwarder. How do we connect this app from the Heavy Forwarder to a Splunk Cloud Search Head + Indexer server?
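If the goal is to forward the data this app collects into Splunk Cloud, the usual shape is an outputs.conf on the Heavy Forwarder pointing at the cloud stack's inputs endpoint (a sketch; the hostname is a placeholder, and Splunk Cloud normally provides a pre-built Universal Forwarder credentials app containing the real values and certificates):

[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.yourstack.splunkcloud.com:9997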