All Topics

I have a couple of scheduled reports that I SCP off of our Splunk Enterprise server. Both reports are in /opt/splunk/etc/apps/search/lookups. One of the reports I set up a while ago; its permissions look right and I can SCP it (file1.csv). The new report gives me a permission denied when I try to copy it (file2.csv).

-rw-r-----. 1 splunk splunk  306519 Jan 26 05:00 file1.csv
-rw-------. 1 splunk splunk 1177070 Jan 26 03:00 file2.csv

Not sure how to get file2.csv group-readable so I can copy it off.
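A minimal sketch of one way to do this, assuming you can run commands on the server as the splunk user or root (note: the scheduled report may recreate the file with its original mode on the next run, so the fix may need to be repeated or built into the generating process):

```shell
# Grant group read on the report so members of the splunk group can SCP it.
chmod g+r /opt/splunk/etc/apps/search/lookups/file2.csv

# Verify: the mode should now read -rw-r----- (0640), matching file1.csv.
ls -l /opt/splunk/etc/apps/search/lookups/file2.csv
```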
Hello everyone, I'm currently trying to optimize Splunk disk space and index usage. I have read about changing the parameter "Pause indexing if free disk space (in MB) falls below", about never modifying the indexes.conf parameters, and some other posts from the community, but I'm not quite sure about the solution for my problem: the coldToFrozenDir/coldToFrozenScript parameters are empty. Kind regards, Tybe
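For reference, a sketch of what the roll-to-frozen setting looks like in indexes.conf when it is set (the index name and path are illustrative, not from the poster's environment). When both coldToFrozenDir and coldToFrozenScript are empty, Splunk simply deletes buckets when they roll to frozen rather than archiving them:

```
[my_index]
# Archive buckets to this directory when they roll from cold to frozen.
# Mutually exclusive with coldToFrozenScript.
coldToFrozenDir = /opt/splunk/frozen/my_index
```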
Hello everybody. I'm starting this question because I'm trying to generate detections for a Google Workspace invader, similar to this post about M365: https://www.splunk.com/en_us/blog/security/hunting-m365-invaders-blue-team-s-guide-to-initial-access-vectors.html. But I cannot find Google Workspace login logs in our current ingest. We installed the add-on and the newest apps available on Splunkbase and could not find them. Browsing Splunk Web, we couldn't find searches equivalent to the ones in the linked post. Has anybody had the same problem? How can I solve it?
Hi Splunkers, I have a problem with timestamps on our platform. Here are some assumptions and acquired knowledge.

Knowledge:
_time is the event time (the time present in the event; in other words, the time when the event was generated).
_indextime is the index time or, if you prefer, the time when the event was indexed.
A timezone issue in the displayed time can be related to user settings, which can be changed under username -> Preferences -> Timezone.

Environment: a Splunk Cloud SaaS platform with logs ingested in different ways: forwarders (both UF and HF), API, syslog, and file monitoring.

Issue: if I expand an event and examine the _time field, why are the event time and the displayed time different?

Important additional info: our users' timezone setting is GMT+1 (since we are in Italy) for all users. The sample is a Windows event, but the problem is present on all logs: it doesn't matter which log source I consider or how it sends events to Splunk; every log shows the time difference. The difference between _time and the displayed time is always 1 hour, for every event from every log source. I searched the community and found other topics about this issue; some were very useful for gaining basic knowledge, like "Difference Between Event Time and _time", but because we are on cloud (with limited ability to set the files and parameters involved) and the issue affects all events, I'm still stuck on this problem.
Hi Team, we are trying to onboard AWS CloudWatch metrics and events data to Splunk, and we decided to go with the Splunk Add-on for AWS pull mechanism. I am trying to configure a custom namespace and metrics created in AWS, but I am unable to see the metrics in Splunk. I edited the default AWS namespaces and added my custom namespace. Is this the right method to add my custom metrics? Can someone guide me here?
Hi, I have a dashboard with 91 panels in different rows. The first panel was created for the sole purpose of running a base search. The search is simple: index=myIndex. The other 90 panels all run against this base search; this is to prevent CPU spikes. The problem I'm facing right now is that the first panel shows in the dashboard, but it serves no purpose being shown. My question: how do I hide (not remove) this panel visually?
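One common approach is to drop the panel entirely and move the base search to the top level: in Simple XML, a top-level <search> with an id can be referenced by other panels via base=, and it is never rendered. A sketch (the id and the post-process query are illustrative):

```xml
<dashboard>
  <!-- Top-level base search: runs once, never rendered as a panel -->
  <search id="baseSearch">
    <query>index=myIndex</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <!-- Each of the 90 panels post-processes the base search -->
        <search base="baseSearch">
          <query>| stats count by sourcetype</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```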
Hi, my task is to extract a field from a query and then search for that field's value. The query returns an object value as a string, and I want to extract data from it.

In summary, I need 3 things:
1. A plain query to get the data and extract a particular field.
2. Use that field as an input for a second query.
3. Get the object data as a string in the result, extract fields from it, and generate a report from it in tabular format.

I was able to get through the 1st step and extract the field, but I am unable to search for it. Below is the query I tried:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table request_id
| head 1
| eval req_query = request_id
| search req_query

If I stop at `head 1` I get the first request_id, but after that the result is empty for me.
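For what it's worth, `| search req_query` looks for the literal word "req_query" in the results, which would explain the empty output. One way to feed the extracted value into an outer search is a subsearch with `return` (a sketch reusing the names from the question; adjust the outer search terms to your data):

```
sourcetype="mykube.source"
    [ search sourcetype="mykube.source" "failed request"
      | rex "failed request:(?<request_id>[\w-]+)"
      | head 1
      | return $request_id ]
```

`return $request_id` hands back just the field's value, so the outer search receives it as a raw search term.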
I'm trying to install a few apps in Splunk. The dialog says, "Enter your Splunk.com username and password to download the app." I use the username/password combination that works on splunk.com, yet I continually get back the error message "Incorrect username or password".
Hello, I have to migrate from an old Splunk standalone instance (version 6.4) to a new one. Is it possible to use in the new instance the same license currently used in the old one in production? Is there anything I need to pay attention to? Thank you in advance!
I am switching from local auth to SAML authentication, and when logging in, the username is now a random string. How do I get it to be the "nickname" or friendly name that is provided in the SAML response? Is there a way to override the field in the SAML stanza in the authentication.conf file? Changing the realName field in the authenticationResponseAttrMap_SAML stanza in authentication.conf doesn't actually change the username. If it is not possible, how would I transfer knowledge objects to the "new" users?
I am trying to fine-tune the use case "Suspicious Event Log Service Behavior". Below is the rule logic:

(`wineventlog_security` EventCode=1100)
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `suspicious_event_log_service_behavior_filter`
| collect index=asx sourcetype=asx marker="mitre_id=T1070.001, execution_type=adhoc, execution_time=1637664004.675815"

But the rule is currently too noisy. Is it possible to set a bin time (5 mins) between the stop-logging and start-logging events? If logging starts again within 5 minutes, I want to ignore the alert. Alternatively, I have seen a field named dvc_priority; can we set the alerts only for high or critical? Please help me with the query.
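A sketch of one way to suppress alerts when logging resumes quickly, using `transaction` to pair each stop event with a subsequent start event. EventCode 6005 ("The Event log service was started") lives in the Windows System log, so the source shown here is an assumption; verify it matches your sourcetypes before relying on this:

```
(`wineventlog_security` EventCode=1100) OR (source="WinEventLog:System" EventCode=6005)
| transaction dest startswith="EventCode=1100" endswith="EventCode=6005" maxspan=5m keepevicted=true
| where closed_txn=0
| stats count min(_time) as firstTime max(_time) as lastTime by dest
```

Only transactions that never saw a matching start event within 5 minutes (closed_txn=0) survive, so a quick service restart no longer alerts.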
"El servidor que aloja Splunk Enterprise no tiene acceso a Internet sin restricciones por razones de seguridad. Es necesario instalar y actualizar Splunk Enterprise Security, pero me gustaría saber c... See more...
"El servidor que aloja Splunk Enterprise no tiene acceso a Internet sin restricciones por razones de seguridad. Es necesario instalar y actualizar Splunk Enterprise Security, pero me gustaría saber con qué FQDN o IP necesita comunicarse para obtener actualizaciones. Esta información es necesaria agregar esos destinos al firewall para que la comunicación no se bloquee y las actualizaciones se puedan realizar sin problemas".
I have the Hyper-V add-on installed and configured on the servers hosting Hyper-V VMs, but McAfee (Trellix) Endpoint Security is blocking the creation of executable files to be run within the Windows directory. It appears a DLL is being created by PowerShell.exe as part of the add-on, and the 'Access Protection' component of McAfee sees this as a threat and blocks it. If I disable Access Protection or add PowerShell.exe to the exclusion list within McAfee, then the add-on creates a tmp file (but no visible DLL) and the configured logs are available within Splunk Enterprise. I do not want to do either of these with McAfee; I would instead prefer to change the location used by the Hyper-V add-on to somewhere outside the Windows directory, so it would not be considered a threat. Is this possible, or is there a better way?
We have recently configured this app on our heavy forwarder and are hitting this error: "('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))". Our Splunk environment is on-prem and the endpoint starts with https. What could be the issue? Thanks.
I have a webapp that makes about 90 API calls to one domain, and another 50 or so API calls to a different domain. I would like to have metrics for both, but it is becoming too cluttered. I would like the calls for the second domain to go into an application container of their own, instead of all the API calls going into the same application container in EUM. Is this possible? Thanks, Greg
I'm not very experienced with Splunk, but I've been asked to set up syslog forwarding from our UPSes to our Splunk server. I've configured it with the default settings and pointed it towards our syslog server on the default syslog port. I'm able to get test logs of any severity to go through without issue, but I am unable to see any other type of logs.

NMC: AP9641
Syslog settings on device:
Port: 514
Protocol: UDP
Message Generation: Enabled
Facility Code: User (I've tried all the other options, but I was still unable to see any logs)
Severity Mapping:
Critical: Critical
Warning: Warning
Informational: Informational
Hello, how do I create a bar chart using two fields while keeping all fields in the statistics table? The column chart automatically created the chart shown below. My intention is to create a report emailed periodically with all the fields, but the column chart uses only two of them. If I use the table command to show only Name and GPA, it shows the two bars, but it removes the rest of the fields. Please suggest. Thanks.

StudentID  Name      GPA  Percentile  Email
101        Student1  4    100%        Student1@email.com
102        Student2  3    90%         Student2@email.com
103        Student3  2    70%         Student3@email.com
104        Student4  1    40%         Student4@email.com

| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"

Current graph: [screenshot]
Expected result: [screenshot]
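A sketch using the sample data from the question: driving the chart from only Name and GPA, while leaving the emailed table to a separate search (or panel) over the same data, keeps all fields available in the report without flattening them into the chart:

```
| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"
| chart values(GPA) as GPA by Name
```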
Hello, I am brand new to Splunk and after watching a short tutorial to get started, I saw that Settings => Data Input => Local Event Log Collection did not appear on my version of Splunk Enterprise. I have it on Mac OS Monterey and it seems to work fine, but I know most use it on Windows. Please, can someone help me find how to log local events on Splunk for Mac? Thank you for your help. Noé
Hi all, I have a sample JSON event in Splunk, shown below. Please help me understand how I can parse the custom_tags value from it. It may contain multiple key-value pairs.

{
  account: xyz,
  eventdate: 01/25/2024,
  properties: {
    version: 1.0,
    requestID: cvv,
    response: {"statusCode":"200", "result":"{\"run_id\":465253,\"custom_tags\":{\"jobname\":\"xyz\",\"domain\":\"bgg\"}}}
    time: 12:55
  }
}
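Since `result` is itself a JSON string embedded inside `response`, one approach (a sketch; the paths assume the event otherwise parses as valid JSON, which the sample as pasted does not quite do) is to run `spath` twice: once to pull out the inner string, then again with `input=` to parse it:

```
... base search ...
| spath path=properties.response.result output=result_json
| spath input=result_json
| table run_id custom_tags.jobname custom_tags.domain
```

The second `spath` extracts every key-value pair under custom_tags, so additional tags beyond jobname and domain would surface as custom_tags.<key> fields as well.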
Hi Team, I tried executing the cluster-merge-buckets command on the Cluster Manager and got the following error / exception:

Command:
./splunk cluster-merge-buckets -index-name _audit -startdate 2020/01/01 -enddate 2024/01/24 -max-count 1000 -min-size 1 -max-total-size 1024

Output:
Using the following config: -max-count=1000 -min-size=1 -max-size=1000 -max-timespan=7776000
Dryrun has started. merge_txn_id=1706209703.24
[...]
peer=IDX01 processStatus=Merge_Done totalBucketsToMerge=28 mergedBuckets=28 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=868MB progress=100.0%
[...]
peer=IDX02 processStatus=Merge_Done totalBucketsToMerge=23 mergedBuckets=23 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=718MB progress=100.0%
progress=100.0% peers=2 completedPeers=2 failedPeers=0 totalBucketsToMerge=51 mergedBuckets=51 bucketsUnableToMerge=0 createdBuckets=2 totalSizeOfMergedBuckets=1586MB
(Additional space required for localizing S2 buckets up to the equivalent of sizeOfMergedBuckets for each peer)

Has anyone experienced the same, or could you help me with a resolution?