All Topics


I know that 6.x Forwarders are out of support and that, per the documentation, they are not compatible with 9.x Indexers, but does anyone know whether a Light Forwarder can still communicate with and send events to a 9.x Indexer? Thanks!
Hi Folks, I have a quick architectural question: do you think it is a good idea to set up an architecture with an ES search head on AWS cloud and the indexers on-premises? Thanks for your reply.
Hi, I am building an alert that fires when a machine has had no activity in a 1h span, and I have configured it to run every hour. The problem is that if the machine has had no activity since 7:00, the alert fires every hour (7:00, 8:00, 9:00, etc.) repeating the same message that the machine has had no activity since 7:00. Is there any way to send the alert only once while the message stays the same (in this case, that the machine has had no activity since 7:00)? If the machine is restarted, has activity from 10:00 to 15:00, and then goes down again, I should receive a new alert saying the machine has had no activity since 15:00. Thanks in advance.
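One way to avoid the repeats is per-result alert throttling. A minimal savedsearches.conf sketch, assuming the alert returns one row per machine in a field named host and that a 24h suppression window fits your case (stanza name, field name, and window are all assumptions):

# savedsearches.conf -- stanza name is hypothetical; "host" is an assumed field
[machine_no_activity_alert]
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 24h

With alert.suppress.fields, suppression is tracked per host value, so a different machine going quiet still alerts immediately. Note the fixed period: a machine that recovers and goes down again inside the window will not re-alert until the window expires.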
Hello, some context before showing the query. I test queries on one Splunk instance because I don't have access to the Splunk instance that holds the data. I have a query that shows me the result for these two hostnames, but I need the same result across all hostnames, not just these two.

I have 2 queries. The first one gets me the results for the two hosts, although I don't know whether that is because I have the data indexed (I can't find it by index) or because I use makeresults (I read that it works from memory, so the data doesn't have to be indexed).

| makeresults | eval EventCode="20", hostname="wdv01ssps,DESCASSOAW01", error_code="0x80070003 0x80004004" | makemv delim="," hostname | makemv delim=" " EventCode | makemv delim=" " error_code | mvexpand EventCode | mvexpand hostname | mvexpand error_code | table hostname EventCode error_code

I'd like to use the latter query below, as it's easier for me to display results by hostname, which in that data is called ComputerName. What can I do to show all ComputerName values with these same filters?

index=sistemi sourcetype="wineventlog" TaskCategory="Windows Update Agent" AND EventCode IN (20, 27) | eval day_of_week = lower(strftime(_time, "%A")) | eval date_string = strftime(_time, "%Y-%m-%d") | eval Weekend=if(day_of_week="saturday" OR day_of_week="sunday",1,null()) | search Weekend=1 | stats count by Message EventCode ComputerName date_string | stats list(Message) by ComputerName date_string EventCode

It may simply not be possible to list all the computer names without listing them one by one. Thanks.
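For what it's worth, the second query already enumerates every ComputerName it finds, because stats ... by ComputerName groups on whatever values exist in the events; nothing needs to be hardcoded. A minimal sketch keeping only the weekend filter (the error_code values from the makeresults test are omitted, since that field is never referenced in the indexed search — an assumption about what you need):

index=sistemi sourcetype="wineventlog" TaskCategory="Windows Update Agent" EventCode IN (20, 27)
| eval day_of_week=lower(strftime(_time, "%A"))
| where day_of_week="saturday" OR day_of_week="sunday"
| eval date_string=strftime(_time, "%Y-%m-%d")
| stats list(Message) as Messages by ComputerName date_string EventCode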
Hi All, How do I get this screen for eval? Regards Suman P.
Hi community, I have 2 data sources: one is a CSV with the list of districts (including the population of each district), and the other comes from PostgreSQL. The common field is the district. After a lookup against the CSV, I have the list of districts, for example 6 districts. 5 of the districts have an equivalent population (e.g. 500), while the remaining district has only 100 inhabitants. Later I want to bin by time span, count the activity of each district, and send an alert if a district has no activity. The difficulty is that the span is not the same among all districts: I want span=1 day for the 5 districts with 500 people, and 5 days for the district with 100 people. Within a single search, can I use a case or if/else to separate the 2 cases? Here is what I'm doing:

| dbxquery connection="database" query="SELECT * FROM table"
| lookup lookup.csv numero OUTPUT DISTRICT
| eval list_district = "1,2,3,4,5,6"
| eval split_list_district = split(list_district,",")
| mvexpand split_list_district
| where DISTRICT=split_list_district
| eval _time=strptime(time_receive,"%Y-%m-%dT%H:%M:%S.%N")
| eval _comment="Can we do something here to separate 2 cases"
| bin _time span=1h
| chart sum(count_activity) as count by _time DISTRICT
......
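bin only takes a single fixed span, but you can compute a per-row bucket width with eval and align _time to it yourself. A minimal sketch, assuming district 6 is the low-population one (a hypothetical ID) and replacing the bin/chart tail of your search:

| eval span_secs=case(DISTRICT="6", 5*86400, true(), 86400)
| eval _time=_time - (_time % span_secs)
| stats sum(count_activity) as count by _time DISTRICT

The modulo trick snaps each event's _time down to the start of its own bucket, so districts with different widths can coexist in one stats.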
Hi, I have a question on 'fields', please.

sourcetype=* status IN ("200", "400", "500") | fields -status | stats count by status

This SPL is not removing 'status' from the output, while the one below does remove it. Why isn't the first one working?

sourcetype=* status IN ("200", "400", "500") | fields - status | stats count by status

Regards Suman P.
Hi Community, I have 2 multivalue fields. How can I match each of the values in the first mvfield against all of the values in the second mvfield?

index=animals | eval all_animals = mvappend('animal1', 'animal2', 'animal3') | table id_animals all_animals

id_animals | all_animals
001 | dog goat cow
002 | tiger lion
003 | parrot snake boar

index=pets | eval all_pets = mvappend('pet1', 'pet2') | table id_pets all_pets

id_pets | all_pets
A1 | parrot mouse
A2 | dog cat

Desired result:

id_animals | animals | id_pets
001 | dog cat mouse | A2
002 | tiger lion | NO MATCH
003 | parrot snake | A1
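A minimal sketch, assuming a match means any single animal value appearing in both rows (field and index names are taken from the example above):

index=animals
| eval animal=mvappend('animal1', 'animal2', 'animal3')
| mvexpand animal
| join type=left animal
    [ search index=pets
      | eval animal=mvappend('pet1', 'pet2')
      | mvexpand animal
      | fields animal id_pets ]
| stats values(animal) as animals values(id_pets) as id_pets by id_animals
| fillnull value="NO MATCH" id_pets

join has result limits, so on large data a stats-based merge over the shared value is usually safer, but this shows the idea.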
Hi all, my search results are formatted similar to XML, e.g.: <last_modified_date>1669004771000</last_modified_date><assigned_group>Test Group 1</assigned_group><assigned_support_company>Company 1</assigned_support_company><assigned_support_organization>Analytics</assigned_support_organization><assignee>John Doe</assignee> I would like to split these results into fields, keeping their headings from the <> tags. I have tried rex and other commands, but I am stuck.
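A minimal sketch using the xmlkv command, which extracts key/value pairs from XML-style tags, taking field names from the tag names (assuming the tags sit in _raw as shown):

... your base search ...
| xmlkv
| table last_modified_date assigned_group assigned_support_company assigned_support_organization assignee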
While investigating logs coming in from an OSSEC server, I found that `SPLUNK_TA_ossec` alters data erroneously. The investigated event is for Rule 18149 from a Windows server. The original user is `WINSERVER01$` - as we know, a "machine account", as indicated by the trailing "$" sign. `SPLUNK_TA_ossec` (current version is 4.1.0) simply strips off the dollar sign in `transforms.conf` in the `[kv_for_default_ossec]` stanza and shows the user as `WINSERVER01`, just like a normal username. So in a search that filters out machine accounts, like `NOT user=*$`, these accounts are shown and counted anyway. => Error
I have the following search queries:

API Error Alert
---------------
index=myindex sourcetype=my-app:app | spath message | regex message="^.*Error while creating account.*$" | dedup my_id_field

API Down Alert
---------------
index=myindex sourcetype=my-app:app | spath message | regex message="^.*api-down.*$" | dedup my_id_field

Update API Error
------------------
index=myindex sourcetype=my-app:app | spath message | regex message="^.*Error while updating trial account.*$" | dedup my_id_field

I have some more of the same kind; each checks for a different message using a regular expression. Now I would like to create an email alert for all of these events and combine them into one query, so I can create a single alert rather than individual alerts. How can I combine these queries? It should trigger the email alert if any of these conditions is true. I have tried the following, but it is not working.

index=myindex sourcetype=my-app:app | spath message | regex message="^.*Error while creating account.*$" | regex message="^.*api-down.*$" | regex message="^.*Error while updating trial account.*$" | regex message="^.*JWT Token creation failed with error.*$" | regex message="^.*Error while fetching IPLookU.*$"
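Chained regex commands are ANDed: every one of them must match for an event to survive, so requiring all five messages at once filters everything out. A minimal sketch ORing the patterns with regex alternation instead (patterns copied from the queries above):

index=myindex sourcetype=my-app:app
| spath message
| regex message="Error while creating account|api-down|Error while updating trial account|JWT Token creation failed with error|Error while fetching IPLookU"
| dedup my_id_field

The ^.* and .*$ anchors are redundant for an unanchored substring match, so they are dropped here.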
Hi everyone, I am currently trying to expand one of the multivalue fields, but I am getting the result with the error below: Field 'deployment' does not exist in the data.

index=json | rex mode=sed "s/.*-\s//g" | spath | rename ops{}.steps{}.steps{}.address{}.deployment as deployment | mvexpand deployment | mvexpand operation | table deployment
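A minimal sketch, assuming the rename is failing because the braced array path never materialized as a single field name; spath with an explicit path can write the multivalue field directly (the path is copied from the query above, and the mvexpand on the never-created operation field is dropped):

index=json
| rex mode=sed "s/.*-\s//g"
| spath path=ops{}.steps{}.steps{}.address{}.deployment output=deployment
| mvexpand deployment
| table deployment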
Hi, we were using the Splunk App for AWS, which Splunk has stopped supporting and is now a legacy app. Splunk recommends migrating to the Splunk App for Content Packs (which includes the Content Pack for AWS Reports and Dashboards), but migrating to this requires ITSI or IT Essentials Work as a prerequisite to integrate and use the Content Packs. ITSI is a paid app, and IT Essentials Work has no version compatible with Splunk Enterprise 9.0. Why has Splunk retired a free app while, as the alternative, we are forced to move to a paid version (indirectly, because of ITSI)?
Hello hello, we have the Splunk DB Connect app working in our environment, but it suddenly stopped working, and I can see this log:

2022-11-21 23:22:39.050 -0500 [dw-203047 - PUT /api/inputs/server_average_latency] ERROR c.s.d.m.repository.DefaultConfigurationRepository - action=failed_to_get_the_conf reason=HTTP 401 -- call not properly authenticated
com.splunk.HttpException: HTTP 401 -- call not properly authenticated
at com.splunk.HttpException.create(HttpException.java:84)
at com.splunk.DBXService.sendImpl(DBXService.java:131)
at com.splunk.DBXService.send(DBXService.java:43)
at com.splunk.HttpService.get(HttpService.java:154)
at com.splunk.Entity.refresh(Entity.java:381)
at com.splunk.Entity.refresh(Entity.java:24)
at com.splunk.Resource.validate(Resource.java:186)
at com.splunk.Entity.validate(Entity.java:462)
at com.splunk.Entity.getContent(Entity.java:157)
at com.splunk.Entity.size(Entity.java:416)
at java.util.HashMap.putMapEntries(HashMap.java:501)
at java.util.HashMap.<init>(HashMap.java:490)
at com.splunk.dbx.model.repository.JsonMapperEntityResolver.apply(JsonMapperEntityResolver.java:34)
at com.splunk.dbx.model.repository.JsonMapperEntityResolver.apply(JsonMapperEntityResolver.java:18)
at com.splunk.dbx.model.repository.DefaultConfigurationRepository.get(DefaultConfigurationRepository.java:92)
at com.splunk.dbx.server.dbinput.task.DbInputTaskLoader.load(DbInputTaskLoader.java:63)
at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.update(InputServiceImpl.java:221)
at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.update(InputServiceImpl.java:38)
at com.splunk.dbx.server.api.resource.InputResource.updateInput(InputResource.java:81)
at sun.reflect.GeneratedMethodAccessor482.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43... CONTINUES

I am not going to add the whole log because it is huge. We have a cluster; DB Connect is installed on the search heads, and all the inputs are configured on the Heavy Forwarder. If you have any idea what I can check to find the issue, please let me know. The environment runs on Linux. Thanks in advance. Best regards.
Hello, I am using the Splunk Cloud Victoria experience and am attempting to install the n-1 version of an app on a Splunk search head. When uploading the file for installation, I receive the following error: "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." My question is: how can I install a non-current version of an app on Splunk? Or how can I bypass this message to upload the app manually? For reference, I am installing version 8.5.0 of "Splunk Add-on for Unix and Linux", but this will need to be done for multiple apps. Thank you.
We have a tool that writes to a Splunk Cloud indexer, but we are trying to migrate to an on-prem system. For now we need to write to both at the same time, but unfortunately the two deployments use different index names for the same data. I've tried updating the files created by the .spl install with the new tcpout destination, but this seems to honor one index name and ignore the other, causing issues on the indexer in question. I've also tried having two separate setups in the app directory, but then only one of the indexers receives data while the other is ignored. Is there a way to send the same data to 2 different instances, one in the cloud and one on-prem, with each expecting a different index name?
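For the fan-out itself, a forwarder can clone data to two output groups. A minimal outputs.conf sketch (group and server names are hypothetical):

# outputs.conf -- group and server names are hypothetical
[tcpout]
defaultGroup = cloud_group, onprem_group

[tcpout:cloud_group]
server = inputs.mystack.splunkcloud.com:9997

[tcpout:onprem_group]
server = onprem-indexer.example.local:9997

The catch is that the index name is set once in the event metadata, so both copies carry the same index. To rewrite the index for one destination, you generally need an intermediate heavy forwarder (or an ingest-time transform on that side) in front of that destination.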
I'm collecting the System logs from a Windows 2012 R2 DHCP server, using Splunk Universal Forwarder 9.0.1.0 sending to a Splunk Enterprise 8.2.5 indexer. Initially I was collecting all logs using this stanza:

[WinEventLog://System]
start_from = oldest
disabled = 0
current_only = 0

The inputs.conf guide here states that, among the allowed key values in a Windows Event Log monitoring stanza, we may use SourceName: "The source of the entity that generated the event. Corresponds to Source in Event Viewer."

I wanted to reduce the number of events collected to only those related to DHCP. When I look in the Windows Event Viewer, I see the DHCP events listed with Source "DHCP-Server". However, if I search my indexed data from this server with

SourceName = DHCP-Server

there are no results. A simple query over all time shows the SourceName values present:

index=windowsdhcp | stats count by SourceName

In the output of that stats I see a different SourceName, namely Microsoft-Windows-DHCP-Server. So I went back to Windows Event Viewer and looked at the XML view of the event, which is as follows:

- <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
<Provider Name="Microsoft-Windows-DHCP-Server" Guid="{6D64F02C-A125-4DAC-9A01-F0555B41CA84}" EventSourceName="DhcpServer" />
<EventID Qualifiers="0">1376</EventID>
<Version>0</Version>
<Level>3</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2022-11-21T17:14:28.000000000Z" />
<EventRecordID>1236733</EventRecordID>
<Correlation />
<Execution ProcessID="0" ThreadID="0" />
<Channel>System</Channel>
<Computer>vmsys305.vhihealthcare.net</Computer>
<Security />
</System>
- <EventData>
<Data>10.119.6.0</Data>
<Data>89</Data>
<Data>6</Data>
</EventData>
</Event>

There is no mention in the XML of a Source of Microsoft-Windows-DHCP-Server; the Provider Name is the same, however. I then pressed the Copy button, which yields the following pasted data (to Notepad):

Log Name: System
Source: Microsoft-Windows-DHCP-Server
Date: 21/11/2022 21:13:18
Event ID: 1376
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: vmsys609.vhihealthcare.net
Description:
IP address range of scope 10.119.6.0 is 89 percent full with only 6 IP addresses available.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-DHCP-Server" Guid="{6D64F02C-A125-4DAC-9A01-F0555B41CA84}" EventSourceName="DhcpServer" />
<EventID Qualifiers="0">1376</EventID>
<Version>0</Version>
<Level>3</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2022-11-21T21:13:18.000000000Z" />
<EventRecordID>93904</EventRecordID>
<Correlation />
<Execution ProcessID="0" ThreadID="0" />
<Channel>System</Channel>
<Computer>vmsys609.vhihealthcare.net</Computer>
<Security />
</System>
<EventData>
<Data>10.119.6.0</Data>
<Data>89</Data>
<Data>6</Data>
</EventData>
</Event>

So now we have our SourceName=Source mapping.

Summary: this leads me to believe that the statement in the guide that SourceName is equivalent to the Source seen in Windows Event Viewer requires some clarification, namely that it matches the Source as shown in the copied output of the XML view of the event, not in the default view.
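For anyone landing here with the same goal, reducing the collection to DHCP events should then work with the advanced key=regex whitelist format in inputs.conf. A minimal sketch using the SourceName value found above (the exact regex is an assumption):

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
whitelist = SourceName=%^Microsoft-Windows-DHCP-Server$%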
We are using AWS ECS with Fargate and trying to siphon the container logs out to our Splunk Cloud instance using fluentd.

1. On the AWS ECS side, below is the task_definition.json used to create services in ECS:

{ "family": "sample-springboot-ms-app", "taskRoleArn": "arn:aws:iam::958993399264:role/ecs-task-role", "executionRoleArn": "arn:aws:iam::958993399264:role/ecsTaskExecutionRole", "networkMode": "awsvpc", "containerDefinitions": [ { "name": "sample-springboot-ms-app", "image": "958993399264.dkr.ecr.us-east-1.amazonaws.com/dev-repository:finance-sample-springboot-ms-v1-0-0-700950146", "cpu": 0, "portMappings": [ { "containerPort": 8080, "hostPort": 8080, "protocol": "tcp" } ], "essential": true, "entryPoint": [], "command": [], "environment": [ { "name": "APP_CONFIG_VALUE", "value": "12" }, { "name": "START_UP_DELAY", "value": "9" }, { "name": "SIMPLE_TEST", "value": "sample-test-value" } ], "environmentFiles": [], "mountPoints": [], "volumesFrom": [], "secrets": [], "logConfiguration": { "logDriver": "awsfirelens" } }, { "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "debaspreet-debug-fluentd", "awslogs-region": "us-east-1", "awslogs-stream-prefix": "splunk-ecs" } }, "image": "958993399264.dkr.ecr.us-east-1.amazonaws.com/dev-repository:fluent-701086531", "firelensConfiguration": { "type": "fluentd", "options": { "config-file-type": "file", "config-file-value": "/fluent.conf" } }, "essential": true, "name": "log_router", "memory": 256, "memoryReservation": 128 } ], "requiresCompatibilities": [ "FARGATE" ], "cpu": "1024", "memory": "2048", "runtimePlatform": { "operatingSystemFamily": "LINUX" } }

2. On the fluentd side, below is the fluent.conf:

<system> log_level info </system> <match **> @type splunk_hec protocol https hec_host **************** hec_port 8088 hec_token ***************** index debaspreet host_key ec2_instance_id source_key ecs_cluster sourcetype_key ecs_task_definition insecure_ssl true <fields> container_id container_name ecs_task_arn source </fields> <format> @type single_value message_key log add_newline false </format> </match>

3. Below is the Dockerfile for our custom fluentd image that we host in ECR:

FROM splunk/fluentd-hec:1.2.0
ADD fluent.conf /fluent.conf

Despite the above configs, we still don't see the container logs in Splunk. Not sure what's incorrect in the config or what's missing. Our Splunk Cloud instance has been set up correctly, because we do see the event from the POST below arrive there:

curl -k https://****************.com:8088/services/collector/event -H "Authorization: Splunk ****************" -d '{"event": "hello world"}'

Any pointers as to why this config isn't working? Thanks
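One way to narrow this down is to tee the stream to stdout, which for the log_router container lands in its awslogs CloudWatch group, so you can confirm whether records are reaching fluentd at all before suspecting the HEC leg. A minimal fluent.conf sketch wrapping the existing splunk_hec output in a copy (all splunk_hec settings stay exactly as configured above):

<match **>
  @type copy
  <store>
    @type splunk_hec
    # ... existing splunk_hec settings unchanged ...
  </store>
  <store>
    @type stdout
  </store>
</match>

If records do show up on stdout, a next thing to check is whether the keys referenced by host_key, source_key, and sourcetype_key actually exist on each record, since those values are read from the record fields.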