All Posts

Hi @schose, were you able to resolve the issue with the navigation menu labels updating? I'm running into the same issue.
Hi @splunkreal  If you use the raw endpoint, _raw will be whatever is sent from the source. Different Splunkbase / custom apps can perform different field extractions depending on the source of the data. Are you sending a particular type of log, or from a specific vendor/tool via Kafka? I'd be happy to investigate whether there is an appropriate add-on for the data source. Note, however, that Kafka may deliver the data in something other than its original format, so it might not extract correctly and might need further work. Please let us know what format the source data is in and I'd be happy to help.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
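To make the raw-endpoint behaviour concrete, here is a minimal sketch of posting one event to HEC's raw endpoint with curl; the host, token, index and sourcetype are placeholders for illustration, not values from your environment:

# Send one raw event; _raw will be exactly the request body
# Replace host, token, index and sourcetype with your own values
curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=my:custom:sourcetype&index=main" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d "2024-01-01T12:00:00Z host=app01 action=login user=alice"

Whatever add-on is assigned to that sourcetype then runs its extractions against exactly that payload.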
Hey @uagraw01, I understand that you might not have liked the solution, but it wasn't ChatGPT-based. If you had read it carefully, I mentioned that I used it as a solution and it worked perfectly in my scenario. The community is for helping every member, and that's what I honestly tried to do. If you had tried the solution, you would have seen how it helps with your situation. Anyway, I hope you get the solution that you need outside of ChatGPT. Happy Splunking! Thanks, Tejas.
Since your multivalue fields appear to be coming from one lookup (Alarm_list_details_scada_mordc.csv), you could try something like this:

| lookup Alarm_list_details_scada_mordc.csv component_type_id AS statistical_subject OUTPUTNEW operational_rate technical_rate maximum_duration minimum_duration alarm_severity
| eval row=mvrange(0,mvcount(operational_rate))
| mvexpand row
| foreach operational_rate technical_rate maximum_duration minimum_duration alarm_severity
    [| eval <<FIELD>>=mvindex(<<FIELD>>,row) ]
| fields - row
@isoutamo is exactly right about the 9.2 changes! To help troubleshoot this further, you should check a few things to understand why the forwarders aren't connecting properly to the DS.

Start by testing connectivity from each forwarder using telnet or netcat to make sure they can actually reach the deployment server on port 8089.

Next, examine your serverclass.conf on the Deployment Server to verify that your forwarders match the whitelist criteria and that the client matching is configured properly. Many times the issue is that the serverclass isn't set up to recognize your specific forwarders.

On the forwarder side, run btool against deploymentclient to see what configuration is actually being applied. This will show you if there are any conflicting settings or if deploymentclient.conf isn't pointing where you expect it to.

If your deployment server is forwarding its internal logs to your indexer, you might also need to add the indexAndForward settings in outputs.conf on the DS, as this can affect how deployment client data appears in the management UI after 9.2.

Just to confirm, are you also managing your Search Head and indexer through the Deployment Server? And is this truly a distributed setup with separate VMs, or multiple Splunk instances on one box? That architecture detail might help explain what you're seeing.

If this helps, please upvote!
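As a rough sketch of those checks (host names and paths are placeholders; adjust for your environment):

# From each forwarder: can it reach the deployment server on the management port?
nc -zv ds.example.com 8089          # or: telnet ds.example.com 8089

# On the forwarder: which deploymentclient settings are actually applied?
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

# On the deployment server: which serverclasses and whitelists are in effect?
$SPLUNK_HOME/bin/splunk btool serverclass list --debug

# On the deployment server: reload after any serverclass.conf change
$SPLUNK_HOME/bin/splunk reload deploy-server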
Hi, have you looked at https://docs.splunk.com/Documentation/Splunk/9.4.2/Updating/Upgradepre-9.2deploymentservers ? There were some changes in 9.2 to how the DS stores client information. This can also lead to a situation where you see those deployment clients on your SH, as it gets that information from your indexer's indexes (I suppose that you have forwarded all logs to the indexer). r. Ismo
Normally Splunk listens on all of your node's IPs unless you define it differently, so from a Splunk point of view you don't need to do anything. However, there are usually firewalls between network segments and often on the hosts themselves. I suppose that you are running this on Linux, so you need to check whether a local firewalld, iptables or some other host-based firewall is in use. Just open port 8000 (for the GUI) and 9997 (for input data from UFs), or whatever port you are using for ingesting data. How this is done depends on your OS (Windows or Linux + distro). You should also check whether your sources (UFs) are in different network segments, and if your users (laptops/workstations) are in a different segment, open access to those ports from there as well. There could also be web proxies in use which don't allow traffic to your Splunk server GUI (port 8000).
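For example, on a distro that uses firewalld this could look roughly like the sketch below; if your hosts use iptables, ufw or something else, use the equivalent commands instead:

# Open Splunk Web and the forwarder ingestion port, then reload the firewall
sudo firewall-cmd --permanent --add-port=8000/tcp   # Splunk Web GUI
sudo firewall-cmd --permanent --add-port=9997/tcp   # data in from universal forwarders
sudo firewall-cmd --reload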
@heathramos Thanks for the update, glad it worked out. 
FYI, it looks like the dashboards are now working. Changing the datamodel at every step and adding the index reference fixed the issue. Thanks for the help.
I modified your answer, @uagraw01, moving your SPL inside a </> block in the editor. That way it's much more readable. Can you check that it is still correct and that nothing got lost? In future, could you always use that </> block when you are adding SPL, dashboards, etc.? With it we can be sure that what we see is what you wrote, not something the editor has changed!
I am facing the same issue. Has anyone solved it?
Hi @ITWhisperer, it's hard for me to share complete events. FYI, the area, zone and equipment fields are coming from the index search, while the other fields come from lookups.
@tej57 Hi Tejas, thanks for the answer, but I don't want ChatGPT or any other AI-prompted answers.
Hi @gcusello 

| datamodel Mmm_availability adapto_shuttle_alarm flat
| lookup Alarm_list_adapto_details.csv ERROR_ID as ALARMID OUTPUTNEW DESCRIPTION Max_duration Min_duration OPERATIONAL TECHNICAL
| rename Max_duration as MAX_DURATION Min_duration as MIN_DURATION DESCRIPTION as description ID as id
| eval matchField = BLOCK."".SHUTTLEID
| lookup mordc_topo modified_field as matchField OUTPUTNEW ID "Parent Description" as Name Asas as asas Weight as Weight modified_field as isc_id
| table _time ID ALARMID description OPERATIONAL TECHNICAL Name Weight asas isc_id event_time MAX_DURATION MIN_DURATION state
| append
    [ search index=mess sourcetype="EquipmentEventReport" "EquipmentEventReport.EquipmentEvent.Detail.State" IN("CAME_IN","WENT_OUT")
    | spath input=_raw path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation.AreaID output=area
    | spath input=_raw path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation.ZoneID output=zone
    | spath input=_raw path=EquipmentEventReport.EquipmentEvent.ID.Location.PhysicalLocation.EquipmentID output=equipment
    | search area=*
    | dedup _raw
    | lookup mordc_site_specific_scada_alarms.csv MsgNr as MsgNr OUTPUTNEW Alarmtext Functiongroup
    | eval zone=if(len(zone)==1,"0".zone,zone), equipment=if(len(equipment)==1,"0".equipment,equipment)
    | eval isc_id=area.".".zone.".".equipment
    | fields _time, isc_id, area, zone, equipment start_time element error error_status description event_time state MsgNr Alarmtext Functiongroup alarm_severity
    | fields - _raw, Alarmtext Functiongroup, MsgNr
    | lookup isc id AS isc_id OUTPUTNEW statistical_subject mark_code
    | eval statistical_subject = trim(statistical_subject)
    | lookup Alarm_list_details_scada_mordc.csv component_type_id AS statistical_subject OUTPUTNEW operational_rate technical_rate maximum_duration minimum_duration alarm_severity
    | search alarm_severity IN ("High", "Medium")
    | lookup mordc_topology.csv modified_field as isc_id OUTPUTNEW ID "Description" as Name Asas as asas Weight as Weight
    | rename operational_rate as OPERATIONAL technical_rate as TECHNICAL maximum_duration as MAX_DURATION minimum_duration as MIN_DURATION
    | table _time ID description OPERATIONAL TECHNICAL Name Weight asas isc_id event_time MAX_DURATION MIN_DURATION state Alarmtext Functiongroup]

This gives the multivalue field results below.
Hello, I had set up a distributed search environment in VirtualBox with a Search Head, an indexer and a Deployment Server. Initially the forwarders were showing up on the deployment server as having phoned home, but after restarting, no clients show up on the DS; instead they appear in the Search Head's Forwarder Management. I checked deploymentclient.conf and the IP points to the Deployment Server. I tried removing the deployment-apps folder on the Search Head and restarting, but I think that because it's in distributed search mode the folder gets recreated automatically.
Thanks @livehybrid, so Splunk should then parse fields correctly for add-ons? Do you mean _raw will be the original event from the source host, sent to the targeted index/sourcetype?
Hi @splunkreal  Are you using Splunk Connect for Kafka? If so, you should be able to set it to use the raw HEC endpoint: "splunk.hec.raw" : "true". For more info check out https://help.splunk.com/en/splunk-cloud-platform/get-data-in/splunk-connect-for-kafka/2.2/configure/configuration-examples-for-splunk-connect-for-kafka
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
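Purely as an illustration, a connector definition posted to the Kafka Connect REST API might look roughly like this. Apart from splunk.hec.raw, the connector class and property names here are assumptions from memory, so verify them against the documentation linked above:

# Submit a Splunk sink connector that uses the raw HEC endpoint (names and URIs are placeholders)
curl -X POST http://kafka-connect.example.com:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "splunk-sink-raw",
    "config": {
      "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
      "topics": "my_topic",
      "splunk.hec.uri": "https://my-hf.example.com:8088",
      "splunk.hec.token": "<your-hec-token>",
      "splunk.hec.raw": "true"
    }
  }'

With raw mode enabled, the HF receives the record body as-is, so the sourcetype's normal add-on extractions can apply.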
Hi @dinesh001kumar  To remove any uncertainty, it *is* possible to have custom CSS/JS within apps in Splunk Cloud; however, I don't think you can upload the files via the Splunk Cloud UI. Instead you will need to package the files within your custom app in the <app_name>/appserver/static directory. Once packaged and uploaded to Splunk Cloud, this should work the same as you may have previously used on-premises with Splunk Enterprise. For more information check out https://docs.splunk.com/Documentation/SplunkCloud/latest/AdvancedDev/UseCSS
As @bowesmana stated, if you are on a Victoria Experience Splunk Cloud stack then your app should sail through AppInspect without having to be manually inspected.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
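A rough sketch of the packaging step, assuming a hypothetical app called my_custom_app (the file names are placeholders):

# Put the custom CSS/JS under appserver/static inside the app, then package it as a tarball
mkdir -p my_custom_app/appserver/static my_custom_app/default
cp dashboard.css dashboard.js my_custom_app/appserver/static/
tar -czf my_custom_app.tgz my_custom_app

The packaged .tgz is what you upload to Splunk Cloud; your dashboards then reference the files (for Simple XML, via the script and stylesheet attributes on the dashboard or form element) as described in the UseCSS docs linked above.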
Hello, is it possible with Splunk HEC from Kafka to receive raw events on the HF in order to parse fields with add-ons? It seems we can only receive JSON data with an "event" field, and so may not be able to extract fields with standard add-ons? The HEC event may also contain the target index and sourcetype. Thanks.
Hi @SN1  I would start by adding a console.log(rowKey); and another one after searchQuery - console.log(searchQuery); - then validate that these output what you expect. Let us know how you get on, as this might help drill down further.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.