Hi people, I wonder whether it is possible to run a query that generates an n-event sample for each sourcetype in an index? In some sense, if the log data has been ingested and conformed properly, this is perhaps not so problematic: you might build a data model or just query across the relevant CIM field (alias). So let's get specific:

index=someIndex sourcetype=someSourceType
| enumerate against some defined key value, say an eventtype
| enumerate all of the eventtypes and pull out any subeventtypes
| list 2-5 events for each subeventtype, else just list 2-5 events for the eventtype
| table _time, eventtype, subeventtype (NULL if blank), event

Does anyone have an idea how to approach this? Any suggestions? Thank you
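A minimal sketch of the sampling step, assuming eventtype and subeventtype are already extracted fields (streamstats numbers the events within each group, so the first few can be kept):

index=someIndex sourcetype=someSourceType
| eval subeventtype=coalesce(subeventtype, "NULL")
| streamstats count as sample_num by eventtype, subeventtype
| where sample_num <= 5
| table _time, eventtype, subeventtype, _raw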
I have a Splunk event in the format below:

{ message { DATE: 2023-07-20T11:53:04 } }

I want to find all the events whose DATE field falls in a particular range. However, the query below is not yielding any results. Is something wrong with it?

BASE SEARCH | message.DATE >= strftime("2023-07-20T11:50:04","%Y-%m-%dT%H:%M:%S") AND message.DATE <= strftime("2023-07-20T11:56:04","%Y-%m-%dT%H:%M:%S")
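Two things stand out, sketched here as a hedged rewrite: a bare pipe needs a command such as where, and strftime formats an epoch into a string while strptime does the reverse, so it is easier to compare epoch values (the single quotes let eval read the dotted field name):

BASE SEARCH
| eval date_epoch=strptime('message.DATE', "%Y-%m-%dT%H:%M:%S")
| where date_epoch >= strptime("2023-07-20T11:50:04", "%Y-%m-%dT%H:%M:%S")
    AND date_epoch <= strptime("2023-07-20T11:56:04", "%Y-%m-%dT%H:%M:%S")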
Hi Splunk support, I recently added my second forwarder. Everything went perfectly except for one thing: the newly added forwarder is not listed as a client in Forwarder Management. Even after a restart, it remains the same. Screen captures are attached. Please advise on a proper solution. Thanks, Ragav
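For reference, a minimal deploymentclient.conf on the forwarder that makes it phone home (the host name and the default management port 8089 are assumptions; restart the forwarder after editing):

[target-broker:deploymentServer]
targetUri = deployment-server.example.com:8089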
Hi, I have a question on the query below:

| makeresults count=730
| streamstats count
| eval _time=_time-(count*86400)
| timechart Count as Timestamp span=1mon
| join type=left _time
    [| savedsearch XYZ
    | eval today = strftime(relative_time(now(), "@d"), "%Y-%m-%d %H:%M:%S.%N")
    | where like (APP_NAME ,"Managed iMAP Application") and like (BS_ID,"%") and like (Function,"%") and like (DEPARTMENT_LONG_NAME,"%") and like (COUNTRY,"%") and like(EMPLOYEE_TYPE,"%") and STATUS="active"
    | eval _time = strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N")
    | eval _time!= "2023-07"
    | timechart Count as Created span=1mon
    | streamstats sum(Created) as Createdcumulative]
| join type=left _time
    [| savedsearch XYZ
    | where like (APP_NAME ,"Managed iMAP Application") and like (BS_ID,"%") and like (Function,"%") and like (DEPARTMENT_LONG_NAME,"%") and like (COUNTRY,"%") and like(EMPLOYEE_TYPE,"%") and STATUS="inactive"
    | eval _time = strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N")
    | timechart Count as Deactivated span=1mon
    | streamstats sum(Deactivated) as Deactivatedcumulative]
| eval Active = Createdcumulative
| eval Deactivated = Deactivatedcumulative
| where _time>=relative_time(now(),"-1y@d")
| fields - Createdcumulative, Deactivatedcumulative, Timestamp

The query fetches results, but I need to restrict the data to the end of the previous month and not show the current month. Can anyone help me modify the query, please?
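One hedged way to drop the current month: relative_time(now(), "@mon") is the start of the current month, so capping _time below it in the final where clause should do it:

| where _time >= relative_time(now(), "-1y@d") AND _time < relative_time(now(), "@mon")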
 I've got Splunk Universal Forwarder up and running on my DC-01, and it's set to forward all Windows event logs to Splunk. But there's a catch - it's not forwarding the Security events for some reason! Interestingly, when I installed the UF on a regular Windows PC, everything worked like a charm, and all event types, including Security events, were forwarded without a hitch.  I've done my fair share of digging through documentation and troubleshooting cases, but I'm still at a loss. It feels like it might be a permissions or rights issue, but I can't seem to find the root cause. If any of you have encountered a similar issue or have any insights, I'd be incredibly grateful for your help and ideas. Thank you in advance for any guidance you can provide!
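A hedged starting point: on domain controllers the Security log ACL is stricter than on workstations, so if the UF service runs as anything other than Local System, that account may need membership in the built-in Event Log Readers group. The input stanza itself is the standard one:

[WinEventLog://Security]
disabled = 0

Checking splunkd.log on the forwarder for WinEventLog errors should confirm whether it is an access-denied problem.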
I am getting a value from my data that is a number but is actually a duration. How do I convert it into minutes, hours, and days?
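Assuming the number is a count of seconds (an assumption; scale it first if it is milliseconds), tostring with the "duration" option renders it as days+HH:MM:SS; duration_field below is a hypothetical field name:

| eval readable=tostring(duration_field, "duration")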
Splunk App for Lookup File Editing version 3.6.0 works fine, but version 4.0.1 does not honor root_endpoint; downgrading to 3.6.0 restores the expected behavior.

Settings:

splunk:
  conf:
    - key: web
      value:
        directory: /opt/splunk/etc/system/local
        content:
          settings:
            root_endpoint: /splunk

Expected result: https://server/splunk/ko-KR/app/lookup_editor/lookup_edit......
Actual result in v4.0.1: https://server/app/lookup_editor/lookup_edit......
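For reference, the web.conf this snippet is meant to generate (a sketch, written to /opt/splunk/etc/system/local/web.conf per the directory key above):

[settings]
root_endpoint = /splunk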
I'm new to Splunk Enterprise, and my task is to forward logs from a Splunk HF (AWS EC2 instance) to an AWS CloudWatch log group. I tried to export the logs using CLI commands and stored them locally on the Splunk HF server; then I used the CloudWatch agent to send the logs to the CloudWatch log group. Please refer to the Splunk CLI command below, used to export the logs:

#./splunk search "index::***** sourcetype::linux_audit" -output rawdata -maxout 0 -max_time 5 -auth splunk:***** >> /opt/linux-Test01.log

The challenge I'm facing is that when I run the CLI command from a Linux crontab, it does not export the logs. Are there any other solutions or guidance available to resolve this issue?
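A common culprit worth ruling out (an assumption, not a diagnosis): cron runs with a minimal environment, so relative paths fail silently. A hedged crontab sketch with absolute paths and stderr captured:

# hypothetical schedule; adjust the search, credentials, and paths
0 * * * * /opt/splunk/bin/splunk search "index::***** sourcetype::linux_audit" -output rawdata -maxout 0 -auth splunk:***** >> /opt/linux-Test01.log 2>> /opt/linux-Test01.err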
I'm trying to complete the lab for my cybersecurity course. I googled a few things for this question, but it doesn't seem to accept the answer. It is a course from Immersive Labs; maybe I'm doing something wrong, or there is a problem with my query, I'm not sure. I've used the query:

index="_audit" action=* info=* | stats count by user

I need your help searching login attempts for username=admin.
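A hedged refinement: in the _audit index, login events carry action="login attempt" with info set to succeeded or failed, so narrowing to the admin user might look like:

index=_audit action="login attempt" user=admin
| stats count by info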
How can I tell Splunk, in props.conf, to take the second timestamp in an event as opposed to the first?
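A sketch under the assumption that the two timestamps are comma-separated (the stanza name and formats are placeholders): TIME_PREFIX is a regex that moves the timestamp extraction point past the first one.

[my_sourcetype]
TIME_PREFIX = ^[^,]+,\s*
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25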
Hi, can anybody tell me what the goal of this regex is?

| regex ImagePath="\\\\\\\\"

As far as I can tell, it seems to search for a character chain delimited by 4 backslashes? Thanks
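A small hedged experiment: each parsing layer halves the backslashes, so the eight in the search string become four for the regex engine, which match two literal backslashes; events whose ImagePath contains a double backslash (UNC-style paths, for instance) are kept.

| makeresults
| eval ImagePath="\\\\server\\share"
| regex ImagePath="\\\\\\\\"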
Hello, does anyone know how to delete an authorization token for an account that no longer exists in Splunk? We have tried it in the web UI, but Splunk reports "Could not get info for non-existent user". We have tried it on the servers, too. For

curl -k -u <username>:<password> -X DELETE https://<server>:<management_port>/services/authorization/tokens/<token_user> -d id=<token_id>

we get:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Could not find object id=xxxxx</msg>
  </messages>
</response>

Is there any directory or file where authentication tokens are saved on the search heads? We need to get rid of the internal errors we receive for this non-existent user, but without removing the token that will not be possible. Many thanks in advance for your help! Greetings, Justyna
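It may help to first enumerate the tokens the server still knows about and copy the id verbatim from that listing before retrying the DELETE (a hedged sketch; the same endpoint supports GET for listing):

curl -k -u <username>:<password> "https://<server>:<management_port>/services/authorization/tokens?output_mode=json"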
Hi, where can we get the Community Edition of Splunk SOAR as an OVA image for VirtualBox? Thank you, Bhushan Kale
Hello, I am attempting to make a dashboard that simply shows whether a host/server is up or down: basically a box that is green or red for each server. Most threads I have seen are fairly old, so I am hoping there is an easier way to show this in either Simple XML or Dashboard Studio. Thanks
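A hedged building block for the status query, assuming "down" means no events indexed for 10 minutes (tune the 600-second threshold):

| metadata type=hosts index=*
| eval status=if(now() - recentTime < 600, "UP", "DOWN")
| table host, status

In either framework, a single-value visualization per host with range colors (green for UP, red for DOWN) then gives the box effect.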
This is a complicated requirement for me, a Splunk beginner; I hope you can give me some advice. Splunk version: 9.0.2303.201. Since there are a lot of events that match my search, I want to generate a time chart from them. I want to group the events by a specific field named "field1": for events in group A, the "field1" value is unique across all events; for events in group B, the "field1" value is repeated once, meaning a search on that value returns two events. Given this, I want to count the events in each of the two groups over time and display them in a timechart. What can I do?
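A hedged sketch: eventstats counts the occurrences of each field1 value without collapsing the events, after which each event can be assigned to a group:

<base search>
| eventstats count as occurrences by field1
| eval group=case(occurrences == 1, "A", occurrences == 2, "B")
| timechart count by group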
Following the documentation here: https://docs.splunk.com/Documentation/Splunk/latest/RESTTUT/RESTsearches#Create_a_search_job I expect that a successful REST API call to the endpoint "/services/search/jobs" would return a single job ID, as the document shows. However, in my testing, when the call returns with a status of 200 (success), the response data contains an object with 6 keys: Object.keys(jobId) = ['links', 'origin', 'updated', 'generator', 'entry', 'paging'], where jobId.entry is an array of hundreds of search jobs; basically, the call to create a search job returned a list of all the jobs on the search head. The code (JavaScript) is in this public repository: https://github.com/ww9rivers/splunk-rest-search Am I missing anything? Thank you for your insights!
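For comparison, a minimal job-creation call (a sketch; host, port, and credentials are placeholders). Creating a job requires a POST with a search parameter, whereas a GET on the same endpoint lists every job on the search head, which matches the symptom described:

curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
  -d search="search index=_internal | head 5" -d output_mode=json

A successful POST returns a small body containing just the new sid.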
Hi, we have a distributed deployment that includes a SH cluster and an IDX cluster; HEC on the IDXs is used to receive the data. I want to use ingest-time lookups, BUT the lookup will need to be refreshed (let's say hourly). Now the question is: how will that work? SHs can refresh a lookup and it will be pushed as part of the search bundle to the IDXs, but I don't think the IDXs will know how to use it for an ingest-time lookup (as this bundle is used during search time), would they? The only option I can think of is to run the scheduled search that populates the lookup on the Cluster Master but tell it to output the lookup into the `slave_apps` folder, but that will require pushing a new IDX bundle every time..... Any thoughts on how to do it? Thanks.
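For context, a hedged sketch of the ingest-time side on the indexers (stanza, lookup, and field names are hypothetical, and the CSV must already sit in an app's lookups directory on each peer):

props.conf
[my_sourcetype]
TRANSFORMS-enrich = add_owner

transforms.conf
[add_owner]
INGEST_EVAL = owner:=json_extract(lookup("assets.csv", json_object("host", host), json_array("owner")), "owner")

The refresh question then reduces to how a fresh assets.csv lands in that directory on every peer, which is what the bundle push (or some out-of-band copy) would have to provide.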
We have a requirement to pull security logs for specific past time ranges, i.e. from December 2022 to April 2023. Splunk cannot complete a search without expiring, even for a 1-hour window in December. This breaks our published 12-month retention commitment for these logs. Please provide options for how to identify, correct, or improve this search challenge.

The search job 'SID' was canceled remotely or expired.

Sometimes the GUI shows "Unknown SID". The version currently used is 8.2.9.
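One hedged diagnostic step: dbinspect shows which buckets still cover December and their state, which helps separate "data aged out" from "search dying":

| dbinspect index=<your_security_index>
| convert ctime(startEpoch) ctime(endEpoch)
| table bucketId, state, startEpoch, endEpoch

If the December buckets exist, raising the job TTL or scheduling the search as an export may keep it from expiring mid-flight.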
I would like to forward logs from sources arriving on UDP inputs on a Heavy Forwarder to two Splunk Clouds, with a different index name in each. I have a Fortinet source coming from server1 (fortinet index), an ESET source coming from server2 (eset index), and others sending logs to the Heavy Forwarder over UDP inputs. For the forwarding to the Splunk Clouds I have two splunkclouduf.spl apps installed, one configuring the forwarding for each, so these apps are installed on the Heavy Forwarder: 100_foo1_splunkcloud and 100_foo2_splunkcloud. I would like to:

- Send all logs to foo1.splunkcloud.com with the predefined index.
- Send only the fortinet and eset sources to foo2.splunkcloud.com, changing the index to foo2_fortinet and foo2_eset respectively.

For this scenario I propose this config ($SPLUNK_HOME=/opt/splunk):

/opt/splunk/etc/system/local/props.conf

[host::server1]
TRANSFORMS-routing1=app_foo1
TRANSFORMS-routing2=fortigate_foo2_index,app_foo2

[host::server2]
TRANSFORMS-routing3=app_foo1
TRANSFORMS-routing4=eset_foo2_index,app_foo2

/opt/splunk/etc/system/local/transforms.conf

[app_foo1]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=<splunkcloud_foo1>

[app_foo2]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=<splunkcloud_foo2>

[fortigate_foo2_index]
REGEX=.
DEST_KEY=_MetaData:Index
FORMAT=foo2_fortinet

[eset_foo2_index]
REGEX=.
DEST_KEY=_MetaData:Index
FORMAT=foo2_eset

/opt/splunk/etc/apps/100_foo1_splunkcloud/local/outputs.conf

[tcpout:splunkcloud_foo1]
<sslPassword for foo1>

/opt/splunk/etc/apps/100_foo2_splunkcloud/local/outputs.conf

[tcpout:splunkcloud_foo2]
<sslPassword for foo2>

/opt/splunk/etc/apps/100_foo1_splunkcloud/default/outputs.conf

[tcpout:splunkcloud_foo1]
server = <bunch of 15 balanced servers of foo1.splunkcloud>

/opt/splunk/etc/apps/100_foo2_splunkcloud/default/outputs.conf

[tcpout:splunkcloud_foo2]
server = <bunch of 15 balanced servers of foo2.splunkcloud>

Is this a valid configuration for the scenario?
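One hedged observation to test: because each transform overwrites _TCP_ROUTING rather than appending to it, events from server1 and server2 may end up routed only to the group set by the last transform in the list. Sending them to both clouds usually means listing both groups in a single FORMAT, e.g.:

[app_foo1_and_foo2]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=splunkcloud_foo1,splunkcloud_foo2

This is a sketch against the stanza names above; sending a test event down each path would confirm the behavior.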