All Topics


Hi, I am trying to create alerts and dashboards for my O365 and AD logs. Is there somewhere that has an overview of the different possible values for fields such as Operation? Since I don't have a log from when a user is created, I don't know what value the log will contain, e.g. UserCreated, UserWasCreated, or CreateUser. I hope that makes sense.
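A minimal SPL sketch for discovering those values from your own data, assuming the O365 events are in index=o365 and expose an Operation field (both assumptions — adjust to your environment):

  index=o365 earliest=-30d
  | stats count by Operation
  | sort - count

The same pattern (stats count by some field) works for the AD logs, e.g. by EventCode, and is usually quicker than hunting for a reference list of event names.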
Hi, I hope that asking this question will not cause controversy. I currently manage a hybrid between Splunk and ELK. Some of the sources come directly to Splunk, where we pay for the licensing, but since there are sources that send very large volumes of data (as we know, Splunk is very good but very expensive), we send those to ELK, and from Splunk we use queries like this to display data from ELK:

  | ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="src_ip:198.7.62.204" fields="*"

To be clear, these logs are not arriving directly in Splunk; Splunk is running an external query against ELK. I would like to know whether it is possible to correlate two sources. In this case I need to correlate the Palo Alto logs of type THREAT with those of type TRAFFIC. This is what I have tried, but it does not work:

  | ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="Type:TRAFFIC AND Threat_ContentType:end AND Action:allow AND NOT SourceLocation:172.16.0.0-172.31.255.255 AND NOT SourceLocation:192.168.0.0-192.168.255.255 AND NOT SourceLocation:Colombia" fields="GeneratedTime,Threat_ContentType,Action,SourceIP,DestinationIP,DestinationPort,NATDestinationIP,SourceLocation,DestinationLocation,SourceZone,DestinationZone"
  | table GeneratedTime Threat_ContentType Action SourceIP DestinationIP DestinationPort NATDestinationIP SourceLocation DestinationLocation SourceZone DestinationZone *
  | append [ ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="type:THREAT" fields="threat" | table threat ]
  | table threat Action *
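A minimal sketch of one way to correlate the two result sets once both are back in Splunk, assuming THREAT and TRAFFIC events share a key such as SourceIP (the field names here are assumptions, not confirmed by the post): append the second set, then aggregate on the shared key:

  | ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="Type:TRAFFIC" fields="SourceIP,DestinationIP,Action"
  | append [ ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="Type:THREAT" fields="SourceIP,Threat"]
  | stats values(Action) as Action values(Threat) as Threat values(DestinationIP) as DestinationIP by SourceIP
  | where isnotnull(Threat)

append only concatenates rows; the stats over a common field is what actually joins the two sources.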
Can I carry over the data I accumulated in Splunk Enterprise 9.0.2 to a Splunk Enterprise 7.3.3 instance?
I am currently testing Splunk Cloud DDSS to AWS S3 buckets. I see logs in my S3 bucket once an index gets rolled over to S3 after its "Searchable Retention" period ends. My question is about the logs I see in the S3 buckets being compressed with ".zst". Is this a configuration from Splunk or AWS, and is there a way to change it to gzip? Can we not have the logs in their default extension and gzip them accordingly?

My next step is to test the restore process, which requires a standalone Splunk Enterprise instance. How should I go about that - one indexer and one search head, assuming it will be for one index only?

Thank you
Hello, I am currently running into an issue where I am unable to store or retrieve any data from my storage/passwords endpoint using the Splunk SDK for Python. Here is the message I keep receiving.

I have yet to have success with this, but here is the code:

  def _load_secrets(self):
      service = client.connect(host="localhost", app="myapp", owner="admin", token=self.sessionKey)
      self.secrets = service.storage_passwords

This function gets called in my __init__ function when the object is instantiated. I am storing the secrets object in a class attribute so it is accessible to all functions that need to interact with this collection. I have tried this in functions outside of the class I created, and it failed as well. I have tried changing the owner to "nobody", changing the scheme to "http", and setting the verify arg to False, but none of this has helped.

I have passAuth enabled for "splunk-system-user" in my inputs.conf file to allow the use of the session key taken from standard input (I am getting the session key without issue). It appears that I am connecting successfully, but when I attempt to access the storage/passwords collection it fails.

PS: I will be storing an API key and the credentials to retrieve it in here. I successfully store the credentials from my JS function for the setup; my issue is only with Python.

Does anyone know how to fix this?
Hi, we deployed the Splunk Add-on for Unix and Linux on a few AIX and Netezza servers and noticed some missing metrics.

NETEZZA: No df metrics are being returned. The error message in the _internal index shows the following:

  Splunk_TA_nix/bin/df_metric.sh" df: unrecognized option '--output=source,fstype,size,used,avail,pcent,itotal,iused,iavail,ipcent,target'

AIX: The following iostat metrics are not being returned:

  iostat_metric.rReq_PS
  iostat_metric.wReq_PS

Thanks, AKN
I built a search for the following situation. However, I would like to improve performance so that when a user wants to search by Name, only indexes A and B are searched, not index C. Can I achieve this? Thanks a lot.

  | multisearch
      [ search index=A Name=* Results=* ]
      [ search index=B Name=* Age=* Results=* ]
      [ search index=C Name=* Age=* ]
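A minimal sketch of one option, assuming the intent is that Name-driven searches never need index C (index and field names are taken from the post): replace the multisearch with a single search over only the relevant indexes:

  (index=A OR index=B) Name=* Results=*

multisearch mainly helps when the subsearches need different streaming pipelines; for a plain filter like this, one search across two indexes is cheaper, and index C is never touched.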
Hi, I have an unusual scenario for the data I am working with and would like to know whether it is even possible to extract data this way. In brief, I parsed a value from my initial search into a field using rex, and now I want to use only that value as a new, independent query instead of a sub-search.

Workflow:
1. Find all successful test runs for a suite (this is a long query).
2. Find the reporting_url via an event on each run.
3. Parse the uuid from reporting_url (I used rex on the raw data and saved it in a field like res_uuid).
4. Search only that uuid, since it has multiple test_id records showing Pass/Fail counts (and eventually create a graph from them).

To give a simple example: the first query returns a test-suite-level record, from which I parse the UUID value; the second query is an independent query using that UUID, which I then use for the graph. Please note that the second query's results are not linked to the first query, and a sub-search would only give one record.

(Apologies if this is a very common workflow, but I was not able to find it easily.)
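A minimal sketch of this pattern using map, assuming the first search yields a small number of runs and the UUID is extractable with rex (the index names, the reporting_url field, and the uuid field in the second search are all assumptions):

  index=test_runs status=success
  | rex field=reporting_url "(?<res_uuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
  | dedup res_uuid
  | map maxsearches=10 search="search index=test_results uuid=$res_uuid$ | stats count by test_id, result"

map runs the secondary search once per input row, so the two queries stay independent while the second is still driven by the parsed value.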
Hi, I have a list of hosts whose status I want to check, so I created an if statement to filter out the ones that do not meet the condition, followed by an action to ping the ones that do. For example, with hosts host1, host2, host3, host4 and the condition

  if host == host1 OR host == host4

the next action should scan ONLY (host1, host4). I have the playbook working with all actions, but I just could not figure out how to process only the hosts that meet the IF condition. Thanks
I am getting the following errors when trying to Discover Content. I have tried searching online and am not finding any good explanation. I also tried downgrading app versions and got the same errors. We deployed Splunk recently and have some servers reporting in. Splunk itself is not reporting any errors or issues.
I cannot find any data in the field named Version in my search. Please help me. See the search below:

  | mstats min(cpu_metric.pctIdle) as val WHERE `itsi_entity_type_ta_nix_metrics_indexes` AND CPU="all" by host span=1m
  | eval val=100-val
  | lookup Serveurs-applications-Document-travail.csv "Nom du serveur" AS host OUTPUTNEW Version
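A minimal check worth running first, assuming the lookup file name and the "Nom du serveur" column are as shown in the post: CSV lookup matching is exact (and case-sensitive by default), so verify the column values actually line up with the host values from mstats:

  | inputlookup Serveurs-applications-Document-travail.csv
  | rename "Nom du serveur" as host
  | fields host Version

If the metrics index holds lowercase FQDNs while the CSV holds short or uppercase names, OUTPUTNEW Version will return nothing even though both sides look correct.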
Hi, I am new to Splunk, so please forgive me. I created a calculated field where, if the hostname contains "*-us*", then region=NA. In search, I enter the query region=NA and I see thousands of events matched, but there are no results in the current time range. I am hoping for assistance with understanding why this is, or whether I am going about this the wrong way. Thank you in advance.
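One thing worth double-checking, since wildcard syntax differs between search terms and eval expressions: inside a calculated field, "*-us*" has to be expressed with match() or like() rather than a bare wildcard. A minimal interactive sketch to verify the logic (the index name is a placeholder):

  index=your_index
  | eval region=if(match(host, "-us"), "NA", null())
  | stats count by region

If this returns the expected counts but the saved calculated field does not, the field definition itself is the likely culprit.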
Dear AppD users and admins, we are working on a business-oriented custom dashboard and cannot find how to put the Availability metric on a dashboard. The metric should be displayed as a number, as you can see in the screenshot. I tried all kinds of options - min, max, current, count, and value - and I cannot get the number that the default dashboard shows in the screenshot. Maybe some of you have been on this quest and know how to resolve it.
Hi, I'm running REST queries to retrieve containers that need to be reprocessed based on the values of some of their artifacts. My approach is to query the artifact REST endpoint like this:

  /rest/artifact/?page_size=3000&_filter_name="my artifact of interest"&_filter_update_time__gt="2023-01-01T00:00:00"&_filter_[othercriteria]

The thing is, these artifacts are quite heavy, and in this particular case I only need their container ID field, so there is no point in retrieving all the other, irrelevant fields. If I were querying a single known artifact, I could use the object detail specification documented at https://docs.splunk.com/Documentation/SOARonprem/5.5.0/PlatformAPI/RESTQueryData#Requesting_Object_Detail but I haven't seen any similar way to specify which fields should be retrieved when querying for an object list. Is there any way to do this?

Also, is there any way to query artifacts whose associated container has certain properties? Right now I'm doing a massive artifact query, a massive container query, and matching the results in a playbook. That is something that would be trivial and much lighter to do by SQL-querying the underlying PostgreSQL database.

Hints about this would be much appreciated.
I have data coming from a single source, but I want to send the events that match a REGEX to one index and all the others that do not match to another index. I have already tried changing the order of the entries in the TRANSFORMS- setting, but it still puts events in both indexes. This is the content of the props.conf file:

  [tmpproxy]
  TRANSFORMS-routing1 = CIDR_Routing_matched, CIDR_Routing_others

and this is the content of the transforms.conf file:

  [CIDR_Routing_matched]
  REGEX = src_host\=(?:10\.10\.10\.\d{1,3}|)
  FORMAT = tmp_matched_proxy
  DEST_KEY = _MetaData:Index
  WRITE_META = true

  [CIDR_Routing_others]
  REGEX = .+
  FORMAT = tmp_others_proxy
  DEST_KEY = _MetaData:Index
  WRITE_META = true

Is it possible to stop the TRANSFORMS- setting in the props.conf file after the first good match?
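A minimal sketch of the usual pattern, under the assumption that the desired behavior is "specific match wins, everything else goes to the default": transforms in a class run in order, and each later match overwrites _MetaData:Index, so list the catch-all first and the specific rule last (stanza and index names are taken from the post; the tightened REGEX drops the empty alternation "|)", which otherwise matches every event):

  [CIDR_Routing_others]
  REGEX = .
  FORMAT = tmp_others_proxy
  DEST_KEY = _MetaData:Index

  [CIDR_Routing_matched]
  REGEX = src_host=10\.10\.10\.\d{1,3}
  FORMAT = tmp_matched_proxy
  DEST_KEY = _MetaData:Index

with props.conf reading TRANSFORMS-routing1 = CIDR_Routing_others, CIDR_Routing_matched. Each event still passes through both transforms, but it can only land in one index, since the key is overwritten rather than appended.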
Indicates which Business Day the transaction is considered to belong to. Weekends and public holidays are often rolled forward to the next working day, but it can vary according to the institution. This is stored in the Java internal time format. An example value is 1585180800000, which represents the date 26/03/2020. This is the simple Java function; I am looking for the Splunk solution to convert the long date to a (YYYY)MMDD value:

  long julianDateTime = 1585180800000L;
  DateFormat fmt1 = new SimpleDateFormat("yyyy-MM-dd");
  String dateTime = fmt1.format(new Date(julianDateTime));
  System.out.println(dateTime);
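A minimal SPL sketch of the equivalent conversion, assuming the epoch-milliseconds value sits in a field called business_day_ms (the field name is a placeholder): Java time is milliseconds since the epoch, while SPL's strftime expects seconds, so divide by 1000:

  | eval business_day=strftime(business_day_ms/1000, "%Y%m%d")

For the example value, strftime(1585180800000/1000, "%Y%m%d") yields 20200326 (in UTC; the rendered date can shift by a day depending on the configured timezone).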
Hi, I have a dashboard in Dashboard Studio, but I am unable to use the magnifying glass to see the search in View mode. The dashboard has Read and Write permissions for everyone, so I do not know what the problem may be. Can you please help? Thanks
Hi, I have been tasked with investigating what is needed to receive SAP logs in Splunk. The first thing I found in my initial Google searches is that there is a connector called "SAP PowerConnect for Splunk", but when I go to https://splunkbase.splunk.com/app/3153 and try to download it, I get a message saying that the download is restricted.

I also found this step-by-step guide, and I would like to know whether you think the information is current, because, as we know with Splunk, the information we find on the internet is in many cases very old and perhaps obsolete: https://www.wallsec.de/blog/siem-your-sap-security-audit-log-with-splunk#h.p_2Y3sy8TDSHCy

And in this last link I see a process that is, frankly, complex: Solved: How to Splunk the SAP Security Audit Log - Splunk Community
Hello, I'm having a problem where the _time field of events does not match the actual events. This happened after I rebooted the Splunk server. As you can see from the pictures, before the reboot the _time timestamp matched the time field; after the reboot, _time is 2 hours before the time field. I checked the local Linux server time and the user's Splunk timezone, and they are all OK. Where does Splunk change the time of the events?
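A minimal diagnostic sketch that often narrows this down, assuming the events carry their own textual timestamp in a field called time (the index name and strptime format are placeholders): compare the indexed _time against the parsed event time and against the indexing clock:

  index=your_index
  | eval parsed=strptime(time, "%Y-%m-%d %H:%M:%S")
  | eval delta_event=_time-parsed, delta_index=_indextime-_time
  | stats avg(delta_event) avg(delta_index) by host, sourcetype

A constant offset of exactly 7200 seconds usually points at a timezone mismatch (TZ in props.conf, the OS timezone picked up at restart, or the user profile timezone) rather than clock drift.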
Hello, I need your help with something. I want to combine two search results, and I need your help because I have a problem. I tried this:

  index="nexthink" sourcetype="st_nexthink_device"
  | append
      [| spath "asset.last_boot_duration" output=boot
       | spath "asset.last_logon_duration" output=logon
       | spath "asset.name" output=PC
       | eval demarrage=boot+logon
       | eval date=strftime(now(),"%d/%m/%Y")
       | eval annee_now=mvindex(split(date,"/"),-1)
       | fields demarrage, PC]
      [| search index=easyvista sourcetype=st_easyvista_generic "Identifiant réseau"="PCW-*" Catégorie="Borne tactile" OR Catégorie="All in One" OR Catégorie="Convertible" OR Catégorie="Odinateurs de bureau" OR Catégorie="Ordinateurs portables" OR Catégorie="Ordinateurs format micro" OR Catégorie="Workstation"
       | rename "Identifiant réseau" as PC "Date d'installation" as dd
       | eval annee=mvindex(split(dd,"/"),-1)
       | eval date=strftime(now(),"%d/%m/%Y")
       | eval annee_now=mvindex(split(date,"/"),-1)
       | eval difference=annee_now-annee
       | fields difference, PC]
  | table difference PC demarrage

But I have a problem with the "demarrage" field: I cannot get it when combining the two searches. To clarify what I need:

  -> from index="nexthink" sourcetype="st_nexthink_device":
     - asset.last_boot_duration
     - asset.last_logon_duration
     - demarrage = BOOT + LOGON
  -> from index=easyvista sourcetype=st_easyvista_generic:
     - date: this year
     - annee: the year the PC was installed
     - difference = this year minus the installation year

Thank you for your help
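A minimal sketch of the usual pattern for combining per-PC values from two sources, assuming PC is the common key (field names demarrage and difference are taken from the post): append the second search, then collapse the rows with stats so each PC carries fields from both sides:

  index="nexthink" sourcetype="st_nexthink_device"
  | spath "asset.last_boot_duration" output=boot
  | spath "asset.last_logon_duration" output=logon
  | spath "asset.name" output=PC
  | eval demarrage=boot+logon
  | fields PC demarrage
  | append
      [ search index=easyvista sourcetype=st_easyvista_generic "Identifiant réseau"="PCW-*"
      | rename "Identifiant réseau" as PC, "Date d'installation" as dd
      | eval difference=tonumber(strftime(now(),"%Y"))-tonumber(mvindex(split(dd,"/"),-1))
      | fields PC difference ]
  | stats values(demarrage) as demarrage values(difference) as difference by PC

Without the final stats, the appended rows simply sit below the first result set, which is why demarrage appears empty on the easyvista rows.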