All Posts


@bowesmana Ok, thanks for your reply; I understand, and I appreciate it. VM shouldn't be combined in the values; the rest of the columns should be combined when their result values match, and then shown in the visualization. I'm still looking for some alternative options here. Regards,
@jamos_bt - Here are some key pointers to keep in mind as a developer of an app for a search head cluster (SHC), meaning three or more search heads kept in sync with each other for configuration and lookups. Splunk handles the configuration sync automatically as long as you follow the practices detailed below. Your app will be installed from another Splunk machine called the "deployer". You can ask the user to make some configuration on the deployer directly, but you don't need to as long as your configs are getting synced properly. To ensure the configuration stays in sync, keep this in mind: Do not modify config files directly on the file system; use Splunk REST endpoints to make changes to config files. This includes your app's configuration page, if it has one: it should also only make changes via REST endpoints. Do not modify lookup files directly on the file system; use either Splunk REST endpoints or the outputlookup command to make changes to lookups. Your alerts will be executed on only one instance, decided by the SHC captain at runtime, and it could be a different member every time. Your dashboards should work as-is as long as you are not doing anything crazy. I hope this helps!!!
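As an illustration of the lookup point above, a scheduled search can maintain a lookup through the outputlookup command instead of editing the CSV on disk; the lookup name here is just a hypothetical example, not something from the original post:

```spl
| inputlookup my_assets.csv
| eval last_updated=now()
| outputlookup my_assets.csv
```

Because outputlookup updates the lookup through Splunk rather than the file system, the change is eligible for replication to the other SHC members.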
Hello, is it possible to get the serial numbers of Windows/Linux machines being ingested into Splunk using the Splunk Add-on for Windows or Linux? Thanks
You can't merge a single column across 2 other columns, as in your vm4/vm5 example. You can do

| stats values(VM) as VMs by col1 col2
| sort VMs

but it will give you separate rows for vm4/vm5
@bowesmana To simplify it further for you: split by VM, and I'm looking to merge matching values into one. For example, if I have two values of 'car', those should become one value in a single box, and the same should happen whenever result values match. | stats values(col1) values(col2) is not helping, as it gives a combination of values. Regards,
To split by VM, just change it like this:

| stats values(col*) as col* by VM
Hi @glc_slash_it, thanks for your reply. It is giving a combination of several fields, but duplicates are showing up. I want to get rid of duplicates where two values match, and show a single result value instead of two combinations. And I want to display it by VM (in my example, the VM column will always be unique). Regards,
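If the goal is one deduplicated set of values per VM across both columns, a minimal sketch (assuming fields named col1 and col2 as earlier in this thread, and untested against your data) is to merge them with mvappend and deduplicate with mvdedup:

```spl
| stats values(col1) as col1 values(col2) as col2 by VM
| eval combined=mvdedup(mvappend(col1, col2))
| fields VM combined
```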
I'm not here to help, sorry man; I have the same problem. Did you find a solution? TIA
@VatsalJagani Thank you for your quick reply. But as a .NET platform software engineer, I would prefer to use the C# SDK, and I am confused about why the C# SDK was deprecated. We planned to use the Splunk C# SDK to implement some new features; it seems we will have to change our solution. Thank you again.
Sounds like you may have the old version still lingering around in <app>/local/data/ui/views/
Our custom app had changes to the views, and these changes are not getting updated. I zipped the custom app and followed the install-from-file process. The custom app passed AppInspect version 3.0.3 after I figured out how to run the slim generate-manifest command. It took a few tries to get it correct, but I have uploaded this custom app to Splunk Cloud. When I use the app, I expect the latest XML code for our custom views to be used, but the data is not displaying correctly in the chart. When I click the Open in Search icon, I get an old version of the view's search query, so that explains why the chart looks funny. Has anyone dealt with this before? Are there tricks to clearing out the obsolete views when uploading a new version? I have incremented the minor and release versions for other reasons; I do know the cloud expects the versions to increment. Our last working version was 1.0.115 and my current version is 1.1.7.
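One way to check which view definitions the search head is actually serving, and from which app and sharing level, is the rest search command; a sketch, with the app name as a placeholder for your custom app:

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:acl.app="your_custom_app"
| table title eai:acl.app eai:acl.sharing updated
```

A view appearing with private or app-local sharing that you expected to come from the app's default directory can indicate a stale copy shadowing the shipped one.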
My Linux web server is running Apache, and I'd like Splunk to analyze the logs. I'm using the "Splunk App for Web Analytics". I followed the documentation, imported my Apache log files, and installed the "Splunk Add-on for Apache Web Server". My Apache logs are getting properly parsed in Splunk, and I updated the eventtype web-traffic to point to the logs by sourcetype. I'm running into a problem configuring the Web Analytics app. It found two log files (access_log and ssl_access_log), and I pointed them to the site's domain. access_log appears to be configured correctly, but ssl_access_log gives the error "Site not configured". Lastly, running "Generate user sessions" and "Generate pages" shows zero events. There are no results in any of the app's dashboard menus, but I do see plenty of logs in the raw search. Any idea what's going on? Here are two screenshots of my configs:
It is somewhat correct; I'm trying to do a 1:1 match for the two specific columns. The idea is to start with the case where the two columns match, then another where they do not, where Qui-Gon Jinn is in both the Sith and Jedi indexes and listed in both columns. For some reason I think I might be making this more difficult than it needs to be: if the two IDs match in both columns, then they are listed with the rest. Hopefully that clears it up. I am still relearning search in Splunk, so I do apologize.
Hi @tharun.santosh, Please use the relative path in the Dashboard setting. It seems you are using an incorrect path in the screenshot above; please use "Hardware Resources|Cluster|Pods count" as the value. Relative path documentation: https://docs.appdynamics.com/appd/21.x/21.3/en/appdynamics-essentials/alert-and-respond/configure-health-rules/define-custom-metrics-for-multiple-entities Thanks, Satbir
Hi @Rajkumar.Varma, As of now, it does not look like we have any functionality to send such details as a report. However, please check if the below helps:
1. You can view the license usage on the Controller as per the documentation below: https://docs.appdynamics.com/21.9/en/appdynamics-essentials/appdynamics-licensing/observe-license-usage
2. You can also use our REST APIs to get the details of license usage; please refer to the article below: https://docs.appdynamics.com/21.9/en/extend-appdynamics/appdynamics-apis/license-api
Thanks, Satbir
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/HowSAMLSSOworks
Would an aggregated resulting dataset be sufficient for your ask? I tried to do what I think you are asking by utilizing a stats command to aggregate data from the two indexes together, but it gives just a compressed overview of the analysis. Example of output with simulation data: to achieve this with the base searches you provided would look like this.

(index=sith broker sithlord!=darth_maul) OR (index=jedi domain="jedi.lightside.com" (master!="yoda" AND master!="mace" AND master="Jinn"))
| fields + _time, index, Jname, saber_color, domain, master, strengths, mentor, skill, mission, Sname, strength, teacher, actions
| tojson str(saber_color) str(domain) str(master) str(actions) str(mentor) str(mission) str(skill) str(strength) str(strengths) str(teacher) output_field=unique_field_combos_json
| fields + _time, index, Jname, Sname, unique_field_combos_json
| eval name=coalesce('Jname', 'Sname')
| stats min(_time) as earliest_event, max(_time) as latest_event, count as total_count, count(eval('index'=="jedi")) as jedi_count, count(eval('index'=="sith")) as sith_count, values(index) as indexes, dc(index) as dc_indexes, latest(eval(case('index'=="jedi", unique_field_combos_json))) as jedi_unique_field_combos_json, latest(eval(case('index'=="sith", unique_field_combos_json))) as sith_unique_field_combos_json by name
| eval scenario=if('dc_indexes'==1, case('indexes'=="jedi", "Jedi Only", 'indexes'=="sith", "Sith Only"), "Jedi and Sith")
| foreach *_unique_field_combos_json [ | eval unique_field_combos_json=if(isnotnull('<<FIELD>>'), mvappend('unique_field_combos_json', json_set('<<FIELD>>', "type", "<<MATCHSTR>>")), 'unique_field_combos_json') ]
| fields - *_unique_field_combos_json
| mvexpand unique_field_combos_json
| fromjson unique_field_combos_json
| fields - unique_field_combos_json
| fields + name, type, scenario, total_count, jedi_count, sith_count, saber_color, domain, master, actions, mentor, mission, skill, strength, strengths, teacher
| stats values(*) as * by name
| fields + name, type, scenario, *_count, saber_color, domain, master, actions, mentor, mission, skill, strength, strengths, teacher
| eval scenario_sort=case('scenario'=="Jedi and Sith", 1, 'scenario'=="Jedi Only", 2, 'scenario'=="Sith Only", 3)
| sort 0 +scenario_sort
| fields - scenario_sort

Generating the simulation data was a doozy since I don't have a datagen setup right now, but I was able to put something together using built-in Splunk commands. SPL used to simulate, for reference:

| makeresults count=1000
| eval low=1, high=[ | makeresults | eval index="sith", fields_to_gen=split("Sname|saber_color|strength|teacher|actions", "|") | append [ | makeresults | eval index="jedi", fields_to_gen=split("Jname|saber_color|strengths|mentor|skill|mission|master|domain", "|") ] | mvexpand fields_to_gen | fields - _time | eval value_format=if(match('fields_to_gen', "^[A-Z]name$"), "name", 'fields_to_gen') | rename fields_to_gen as fieldname | tojson str(fieldname) str(value_format) output_field=field_format_json | fields + index, field_format_json | stats values(field_format_json) as field_format_json by index | eval field_format_json_array="[".mvjoin(field_format_json, ",")."]" | fields - field_format_json | streamstats count as index_number_assignment | stats max(index_number_assignment) as index_count | return $index_count ], rand=round(((random()%'high')/'high')*('high'-'low')+'low')
| fields - low, high
| rename rand as index_number_assignment
``` distribute timestamps ```
| streamstats count as iter
| eval _time=now()-('iter'/10)
| join type=left index_number_assignment [ | makeresults | eval index="sith", fields_to_gen=split("Sname|saber_color|strength|teacher|actions", "|") | append [ | makeresults | eval index="jedi", fields_to_gen=split("Jname|saber_color|strengths|mentor|skill|mission|master|domain", "|") ] | mvexpand fields_to_gen | fields - _time | eval value_format=if(match('fields_to_gen', "^[A-Z]name$"), "name", 'fields_to_gen') | rename fields_to_gen as fieldname | tojson str(fieldname) str(value_format) output_field=field_format_json | fields + index, field_format_json | stats values(field_format_json) as field_format_json by index | tojson str(index) str(field_format_json) output_field=json | streamstats count as index_number_assignment | fields + index_number_assignment, json ]
| fromjson json
| fields - json, index_number_assignment
| eval json=json_object()
| foreach mode=multivalue field_format_json [ | eval fieldname=spath('<<ITEM>>', "fieldname"), json=json_set(json, 'fieldname', spath('<<ITEM>>', "value_format")."_") ]
| fields - field_format_json
| spath input=json
| fields - json, fieldname
| fields + index, *
| foreach *name [ | eval low=1, high=5, rand=round(((random()%'high')/'high')*('high'-'low')+'low'), <<FIELD>>='<<FIELD>>'.'rand' | fields - low, high, rand ]
| foreach * [ | eval low=1, nested_high=10, nested_rand=round(((random()%'nested_high')/'nested_high')*('nested_high'-'low')+'low'), high='nested_rand', rand=round(((random()%'high')/'high')*('high'-'low')+'low'), <<FIELD>>=if(NOT match("<<FIELD>>", "[A-Z]name$") AND NOT "<<FIELD>>"=="index", '<<FIELD>>'.'rand', '<<FIELD>>') | fields - low, high, rand, nested_high, nested_rand ]
| eval Jname=if('index'=="jedi" AND 'Jname'=="name_1", "name_unique_jedi", 'Jname'), Sname=if('index'=="sith" AND 'Sname'=="name_2", "name_unique_sith", 'Sname')
``` (index=sith broker sithlord!=darth_maul) OR (index=jedi domain="jedi.lightside.com" (master!="yoda" AND master!="mace" AND master="Jinn")) | fields + _time, index, Jname, saber_color, domain, master, strengths, mentor, skill, mission, Sname, strength, teacher, actions ```
| tojson str(saber_color) str(domain) str(master) str(actions) str(mentor) str(mission) str(skill) str(strength) str(strengths) str(teacher) output_field=unique_field_combos_json
| fields + _time, index, Jname, Sname, unique_field_combos_json
| eval name=coalesce('Jname', 'Sname')
| stats min(_time) as earliest_event, max(_time) as latest_event, count as total_count, count(eval('index'=="jedi")) as jedi_count, count(eval('index'=="sith")) as sith_count, values(index) as indexes, dc(index) as dc_indexes, latest(eval(case('index'=="jedi", unique_field_combos_json))) as jedi_unique_field_combos_json, latest(eval(case('index'=="sith", unique_field_combos_json))) as sith_unique_field_combos_json by name
| eval scenario=if('dc_indexes'==1, case('indexes'=="jedi", "Jedi Only", 'indexes'=="sith", "Sith Only"), "Jedi and Sith")
| foreach *_unique_field_combos_json [ | eval unique_field_combos_json=if(isnotnull('<<FIELD>>'), mvappend('unique_field_combos_json', json_set('<<FIELD>>', "type", "<<MATCHSTR>>")), 'unique_field_combos_json') ]
| fields - *_unique_field_combos_json
| mvexpand unique_field_combos_json
| fromjson unique_field_combos_json
| fields - unique_field_combos_json
| fields + name, type, scenario, total_count, jedi_count, sith_count, saber_color, domain, master, actions, mentor, mission, skill, strength, strengths, teacher
| stats values(*) as * by name
| fields + name, type, scenario, *_count, saber_color, domain, master, actions, mentor, mission, skill, strength, strengths, teacher
| eval scenario_sort=case('scenario'=="Jedi and Sith", 1, 'scenario'=="Jedi Only", 2, 'scenario'=="Sith Only", 3)
| sort 0 +scenario_sort
| fields - scenario_sort
To perform a rolling restart of the SH cluster, use:

splunk rolling-restart shcluster-members

To check the current status of the rolling restart, use:

splunk rolling-restart shcluster-members -status 1
Many thanks for all your inputs. It is working as expected.
Hi everybody, maybe a noob question: when I configured the JavaScript agent, I noticed that you just have to copy-paste a script into the main page of your web app, and the AppKey value is included in that script. But this AppKey is visible if you open the dev tools of any browser. Is there any problem or risk if I leave the AppKey visible in my web app? Any suggestion on how to hide it? I'm working with SvelteKit, but I guess it will be the same for most JavaScript frameworks.