All Topics

Hi, first question here! I'm new to Splunk and I have a basic question about btool. With this command line:
/splunk btool outputs list --debug
is the first element in the (long) resulting list the one that is applied when there is no outputs.conf in a deployed app on the Heavy Forwarder? Am I right? Thanks, Nico
Hello, I am having great difficulty understanding where to begin with the CIM data model. Can anybody clearly summarize the different ways to apply a CIM data model in my own apps? Thanks in advance.
Can someone help me understand the totalResultCount function? I have looked at the documentation and spent an hour or two fiddling with it, but I can't figure out what it is supposed to do.
I need to install Splunk Enterprise and want to run the search head, indexer, and universal forwarder on the same system. Please advise.
Hi! We recently decided to move from Splunk on-prem to Splunk Cloud. Is there any quick way for me to upload my savedsearches.conf file from the on-prem instance to the Cloud instance? I am looking for a way where I don't have to manually copy my saved searches. Thanks!
I have the SolarWinds add-on installed on a Linux HF. I am seeing this error:
+0000 log_level=WARNING, pid=28286, tid=Thread-4, file=ext.py, func_name=time_str2str, code_line_no=321 | [stanza_name="SolarwindAlerts"] Unable to convert date_string "2024-02-15T13:44:46.6370000" from format "%Y-%m-%dT%H:%M:%S.%f" to "%Y-%m-%dT%H:%M:%S.%f", return the original date_string, cause=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/ext.py", line 304, in time_str2str
    dt = datetime.strptime(date_string, from_format)
  File "/opt/splunk/lib/python3.7/_strptime.py", line 577, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/opt/splunk/lib/python3.7/_strptime.py", line 362, in _strptime
    data_string[found.end():])
ValueError: unconverted data remains: 0
Can someone help? I have no data from SolarWinds. I tried reinstalling the add-on and reconfiguring it. It was working until the 8.* version of the HF; now we have upgraded to 9.1.3. It is shown as supported on Splunkbase.
I am using the search below:
| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen
I would like to add a field populated by the name that ends in "srx", as in this sample event:
Jan 4 13:07:57 1.1.1.1 1 2024-01-04T13:07:57.085-05:00 5995-somename-srx rpd 2188 JTASK_SIGNAL_INFO [junos@2636.1.1.1.2.133 message-name="INFO Signal Info: Signal Number = " signal-number="1" name=" Consumed Count = " data-1="3"]
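One possible sketch of adding such a field, assuming the "srx" name is what the metadata host field contains (the srx_host field name and the "srx$" pattern are illustrative, not from the original post):
| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| eval srx_host = if(match(host, "srx$"), host, null())
| fields + host lastSeen srx_host
If the srx name lives only in the raw syslog events rather than in the metadata host value, it would have to be extracted from those events and joined in separately.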
Hello, I have to work on a parser for data with a time format like this: "time: 2024-02-15T11:40:19.843185438Z". It is JSON data, so I have created logic like the below to extract the time.
TIME_PREFIX = \"time\"\:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z
Although I see no errors while uploading the test data, in the time field I can see values only up to millisecond precision, for example: 2/15/24 11:40:19.843 AM. Is this the right way, or does Splunk show nanosecond values too? If it does, what is missing in my logic to view them? Kindly help. Regards.
Can I ingest CPU, memory, and event ID data into a metric index by using the Splunk App for Windows? I am getting data when I ingest it into an event index, but when I change the index to a metric index the data stops coming into any index. #splunkforwarder #splunkappforwindows
Looking for some advice, please! I have pushed the Splunk UF via MS Intune to all domain laptops. All looks well, with the config file and the settings for the reporting server and ports in place. On an example machine, going to Services shows SplunkForwarder is running. These logs are meant to be pushed to our cyber defence third party. However, it seems Splunk has no rights to send logs (possibly due to the 'Log on as' settings in the SplunkForwarder service). Has anyone encountered and resolved this before, or completed a Splunk UF install via Intune?
Hey Experts, I'm new to Splunk and I'm trying to create a new lookup from data in index=abc. Can someone please guide me on how to achieve this? Any help or example queries would be greatly appreciated. Thank you!
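A minimal SPL sketch of one common approach, writing search results into a lookup file with outputlookup; the field names (user, src_ip) and the lookup file name (abc_lookup.csv) are hypothetical placeholders:
index=abc
| stats latest(_time) as last_seen by user, src_ip
| outputlookup abc_lookup.csv
A plain | table of whichever fields should become lookup columns, followed by | outputlookup, works the same way; the search can then be saved and scheduled to keep the lookup refreshed.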
Hello @ITWhisperer, I am trying to get the details of "the volume of data ingestion, broken down by index group". I tried this SPL but am unable to get the results in the table:
index=summary source="splunk-ingestion"
| dedup keepempty=t _time idx
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| bin _time span=1h
| eval ingestion_gb=round(ingestion_gb,3)
| eval group_field=if(searchmatch("idx=.*micro.*group1"), "group1",searchmatch("idx=.*soft.*"), "group2", true(), "other")
| timechart limit=0 span=1d sum(ingestion_gb) as GB by group_field
We have a list of indexes like: AZ_micro, micro, AD_micro, Az_soft, soft, AZ_soft. The indexes containing 'micro' are grouped under the name 'microgroup', while the indexes containing 'soft' are grouped under 'softgroup', and so on. So, in the table I want to show the volume of the "groups" like:
------------------------------------------
group name  | volume
------------------------------------------
microgroup  | <0000>
softgroup   | <0000>
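For reference, a minimal sketch of how the grouping step could be written with case() and match() instead of if()/searchmatch(), assuming the index name is held in the idx field and that the ingestion_gb field exists as in the post (the match patterns and group labels below are illustrative):
index=summary source="splunk-ingestion"
| eval group_field=case(match(idx, "micro"), "microgroup", match(idx, "soft"), "softgroup", true(), "other")
| stats sum(ingestion_gb) as volume_gb by group_field
The same group_field eval could feed a timechart instead of stats if a daily trend per group is wanted.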
index=my_index source="/var/log/nginx/access.log"
| stats avg(request_time) as Average_Request_Time
| where Average_Request_Time > 1
I have this query set up as an alert for when my web app request duration goes over 1 second, and it searches back over a 30-minute window. I want to know when this alert has recovered. So I guess I would effectively run this query twice, against the first 30 minutes of an hour and then the second 30 minutes, and get a result I can alert on when it is returned. The result would indicate that the first 30 minutes was over 1 second average duration and the second 30 minutes was under 1 second average duration, and thus it recovered. I have no idea where to start with this! But I do want to keep the alert query above for my main alert of an issue and have a second alert query for this recovery element. Hope this is possible.
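A minimal sketch of one way to compare the two halves of the last hour in a single search, for use as the recovery alert; the window labels, field names, and 1-second threshold are illustrative and the search would run over a 60-minute range:
index=my_index source="/var/log/nginx/access.log" earliest=-60m
| eval half=if(_time < relative_time(now(), "-30m"), "first", "second")
| stats avg(request_time) as avg_rt by half
| eval first_avg=if(half="first", avg_rt, null()), second_avg=if(half="second", avg_rt, null())
| stats max(first_avg) as first_avg, max(second_avg) as second_avg
| where first_avg > 1 AND second_avg <= 1
When this returns a row, the earlier half was over the threshold and the later half was under it, which is the recovery condition described.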
Dears, I'm trying to install Events Service 23.7 with Elasticsearch 8. The scenario:
1- Adding the host as the root user: this only works with the root user. I tried to do it with another user over passwordless SSH, but it didn't work. (If you advise adding hosts with a different user, I will share the error I get for that.)
2- After adding the host as root, I go to the Events Service:
   a- The default installation directory is /opt/appdynamics/platform/product/events-service/data. When I change it to /home/appd-user/appdynamics-event-service-path/ I get the error "Unable to check the health of Events Service hosts [events-03, events-02, events-01] through port 9080", or it runs and then stops within seconds. The logs show Elasticsearch stopped because it was running as the root user. My question here is also: why does the installation of the Events Service on each server go under /opt/appdynamics?
To work around this I added the hosts with the appd user, but on each Events Service server I ran:
sudo chown -R appd:appd /opt/
sudo chmod -R u+w /opt/
Adding the hosts went well and the installation of the Events Service went well, but some of the Events Services stopped with an error that Elasticsearch was running as the root user, even though it was started with the appd user. To solve that, I ran manually on each server:
bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &
Anyway, I see all of this as a workaround and I want something more stable; I'm sure someone will guide me with the right steps. BR, Abdulrahman Kazamel
I am running this query:
index=* source="/somesource/*" message "403" | search level IN (ERROR)
and the response is:
{ "instant": { "epochSecond": 1707978481, "nanoOfSecond": 72000000 }, "thread": "main", "level": "ERROR", "message": "Error while creating user group", "thrown": { "commonElementCount": 0, "extendedStackTrace": "403 Forbidden:" }, "endOfBatch": false, "threadId": 1, "threadPriority": 5, "timestamp": "2024-02-15T06:28:01.072+0000" }
Now, when I run the following query:
index=* source="/somesource/*" message "403"
| search level IN (ERROR)
| eval Test=substr(message,1,5)
| eval Test1=substr(thrown.extendedStackTrace, 1, 3)
| table Test, Test1
I get a value for Test; the correct substring appears (the output is "Error"). But Test1 is an empty string, whereas I am expecting "403". Since message is at the root it works, but extendedStackTrace is under thrown, and thrown.extendedStackTrace is not rendering the correct result. Although, if I do ...| table Test, Test1, thrown.extendedStackTrace, there is a proper value coming in for thrown.extendedStackTrace. What am I missing?
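For reference, in eval a field name containing a dot normally has to be wrapped in single quotes so it is read as a field name rather than an expression; a minimal sketch of the relevant lines:
| eval Test=substr(message, 1, 5)
| eval Test1=substr('thrown.extendedStackTrace', 1, 3)
| table Test, Test1
table and other non-eval commands accept the dotted name directly, which matches the behaviour described in the post.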
Hello Team, I need help with the points below:
1] How to add an entry from a search, with the fields Host, SourceIP, and DestinationIP, into a lookup table.
2] How to add an entry into a lookup table from a triggered notable, or from the contributing events of the notable.
The requirement is to create a correlation rule from lookup values taken from previously triggered notables.
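A minimal sketch of appending search results to a lookup with outputlookup, assuming Enterprise Security notables are searchable in the notable index and that the field names exist on those events; the lookup file name my_notable_entities.csv is a hypothetical placeholder:
index=notable
| table _time, Host, SourceIP, DestinationIP
| outputlookup append=true my_notable_entities.csv
Scheduled as a saved search, this kind of query could keep the lookup growing from newly triggered notables, and a correlation search could then reference the lookup.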
The Search Head appears to have a rogue Python process (appserver.py) that slowly eats away all the memory on the system and eventually causes an OOM, which requires a manual restart of splunkd; then the issue slowly starts creeping up again.
Navigating the complexities of modern retail and tracking them with SAP and Cisco AppDynamics
Video Length: 2 min 38 seconds
CONTENTS | Introduction | Video | Resources | About the presenter
In this Cisco AppDynamics video, Matt Schuetze discusses the challenges retailers face during peak sales periods, along with shifting consumer behaviors. Their SAP systems must manage a wide gap between the highest stresses during peak times and the regular loads during off-peak seasons, exposing retailers to poor customer experience outcomes. He expands on how using AppDynamics for SAP to monitor, manage, and respond to these challenges in real time helps retailers protect their customers' experience and support their ongoing pursuit of a competitive edge.
Additional Resources
AppDynamics Monitoring for SAP® Solutions: Build resiliency into your SAP landscape
Explore SAP Monitoring with AppDynamics in the documentation
About the presenter
Matt Schuetze, Field Architect
Matt Schuetze is a Field Architect at Cisco on the AppDynamics product. He confers with customers and engineers to assess application tooling choices and helps clients resolve application performance problems. Matt runs the Detroit Java User Group and the AppDynamics Great Lakes User Group. His career includes 10+ years of speaking periodically at user groups and industry trade shows. He has a Master's degree in Nuclear Engineering from MIT and a Bachelor's degree in Engineering Physics from the University of Michigan.
Tech Talk | Security Edition
Did you know the Splunk Threat Research Team regularly releases new, pre-packaged security content? Just in the last few months, the team has released dozens of new and updated detections and analytics stories covering the latest threats, including malware campaigns, zero-day vulnerabilities, CVEs, and more. Join this Tech Talk to learn more from Michael Haag, Principal Threat Researcher, who will provide:
- Best practices for accessing and using the team's content in the Splunk ES Content Update (ESCU) app
- An overview of the team's content updates between November and January
- Deeper dives into new content for detecting DarkGate malware, Office 365 account takeover, and Windows Attack Surface Reduction events
Event Page
Hello All, I have the SPL below to compare hourly event data and indexed data, to find out whether they follow a similar pattern and whether there is a big gap.
| tstats count where index=xxx sourcetype=yyy BY _indextime _time span=1h
| bin span=1h _indextime as itime
| bin span=1h _time as etime
| eventstats sum(count) AS indexCount BY itime
| eventstats sum(count) AS eventCount BY etime
| timechart span=1h max(eventCount) AS event_count max(indexCount) AS index_count
However, when I compare the hourly results, I get more data points in the indexed data than in the event data. Can you please guide me on how to resolve this problem? Thank you, Taruchit
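One possible sketch of putting both counts on the same hourly timeline with two tstats passes, assuming _indextime is available to tstats in the environment; the index and sourcetype placeholders are kept from the post, and the append subsearch is subject to the usual subsearch limits:
| tstats count as event_count where index=xxx sourcetype=yyy by _time span=1h
| append
    [| tstats count as index_count where index=xxx sourcetype=yyy by _indextime
     | eval _time=_indextime
     | bin _time span=1h
     | stats sum(index_count) as index_count by _time]
| timechart span=1h sum(event_count) as event_count sum(index_count) as index_count
Binning event time and index time in separate passes like this keeps each count on its own clock, which may make the comparison between the two series easier to reason about.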