Right now I am getting an error when I try to ping splunkdeploy.customerscallnow.com: "name or service not known". I seem to have followed a pretty clear set of instructions, but I am not yet able to connect.
Hi @karthikm, I suppose that you're speaking of an on-premise installation. Which Add-On are you using for the data ingestion? If I remember correctly, it's possible to define the index for each data source in the GUI; anyway, you could look at the inputs.conf in the Add-On in use and check whether the inputs are (as they should be!) in two different stanzas. If not, you can override the index value by finding a regex that identifies the Firewall logs and following the configuration described in my previous answer https://community.splunk.com/t5/Splunk-Search/How-to-change-index-based-on-MetaData-Source/m-p/619936 or other answers in the Community. Ciao. Giuseppe
Hi @aditsss, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi Team, I am using the below query:

<row>
  <panel>
    <table>
      <search>
        <query>index="abc*" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","")| eval phrase="ReadFileImpl - ebnc event balanced successfully"|table phrase keyword</query>
        <earliest>-1d@d</earliest>
        <latest>@d</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">true</option>
      <option name="wrap">true</option>
      <format type="color" field="keyword">
        <colorPalette type="list">[#118832,#1182F3,#CBA700,#D94E17,#D41F1F]</colorPalette>
        <scale type="threshold">0,30,70,100</scale>
      </format>
    </table>
  </panel>
</row>

Along with the phrase and "True" columns, I want a checkmark to appear in another column. Can someone guide me? Currently I get:

Phrase                                          keyword
ReadFileImpl - ebnc event balanced successfully True
ReadFileImpl - ebnc event balanced successfully True
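One way to get a checkmark column is an extra eval that emits a ✔ character when the keyword matches (a sketch against the search above; the column name "checkmark" and the ✔ glyph are my own choices, not anything Splunk-specific):

```
index="abc*" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","")
| eval phrase="ReadFileImpl - ebnc event balanced successfully"
| eval checkmark=if(keyword=="True","✔","")
| table phrase keyword checkmark
```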
The eval _raw is just to set up sample data in line with your example and is not intended for use in your dashboards.
Hi, I would like to get the list of all users, with roles and last login, via a Splunk query. I tried the following query with a time range of "All time", but it shows an incorrect date for some users:

index=_audit action="login attempt" | stats max(timestamp) by user

Thank you, Kind regards Marta
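One possible cause of the incorrect dates: max(timestamp) compares the timestamp field as a string, which can sort lexicographically rather than chronologically; max(_time) compares epoch times instead. A sketch of one way to combine roles with last successful login, assuming permission to query the REST users endpoint (the endpoint and the roles field are standard; the join on user is an assumption about how your usernames line up):

```
| rest /services/authentication/users splunk_server=local
| rename title AS user
| join type=left user
    [ search index=_audit action="login attempt" info=succeeded
      | stats max(_time) AS last_login BY user ]
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")
| table user roles last_login
```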
I have a HEC, and I am receiving logs from CloudWatch; the default index is set to "aws". From the same HEC token I am also receiving Firewall logs from CloudWatch, and these logs are also going to the index "aws". How can I get the Firewall logs coming from the same HEC token, but a different source, assigned to index "paloalto"? I tried the config below but it doesn't work.

props.conf:
[source::syslogng:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto
disabled = false

transforms.conf:
[hecpaloalto]
DEST_KEY = _MetaData:Index
REGEX = (.*)
FORMAT = palo_alto

I created the index palo_alto in the cluster master indexes.conf and applied the cluster bundle to the indexers. I also applied the above config to the indexers using the deployment server. For some reason the logs are still going to the aws index.
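Two things worth checking (a sketch, with assumptions about your data): by default a transform's REGEX runs against _raw, so matching on the source metadata requires SOURCE_KEY; and the props stanza must match the source exactly as the indexer records it. Also note that HEC data sent to the /services/collector/event endpoint may bypass index-time parsing, which is a frequent cause of exactly this symptom; the /raw endpoint goes through the parsing pipeline.

```
# props.conf - stanza must match the source as Splunk records it
[source::syslogng:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto

# transforms.conf - route events whose source matches to the palo_alto index
[hecpaloalto]
SOURCE_KEY = MetaData:Source
REGEX = syslogng
DEST_KEY = _MetaData:Index
FORMAT = palo_alto
```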
Hi @GaetanVP, sincerely, it's the first time I've seen this command! Anyway, here you can find more info: https://community.splunk.com/t5/Security/Forgot-Pass4symmKey/m-p/378993 Ciao. Giuseppe
Hello Splunkers, I am used to using the following command to decrypt $7 Splunk configuration passwords such as pass4SymmKey or sslConfig:

splunk show-decrypted --value '<encrypted_value>'

I have several questions regarding this command:
1/ Did you ever find any official documentation about it? I was looking here but found no result: https://docs.splunk.com/Documentation/Splunk/9.1.0/Admin/CLIadmincommands
2/ Is it possible to use this command for $6 encrypted (hashed?) strings, like the one stored for the admin password in $SPLUNK_HOME/etc/passwd? I suppose it's not possible, since it's a password and it should not be "reversible" for security reasons.
3/ This question is related to the previous one. Is it right to say that a $7 value has been encrypted, since it's possible to revert it, and a $6 value has been hashed, because it's impossible to get the clear value back?

Thanks for your help! GaetanVP
Thanks. I just refreshed, but it only has the predefined values as per the search query, and not as per the event data.

eval _raw="\"groupByAction\": \"[{\\\"totalCount\\\": 40591, \\\"action\\\": \\\"update_statistics table\\\"}, {\\\"totalCount\\\": 33724, \\\"action\\\": \\\"reorg index\\\"}, {\\\"totalCount\\\": 22015, \\\"action\\\": \\\"job report\\\"}, {\\\"totalCount\\\": 10236, \\\"action\\\": \\\"reorg table\\\"}, {\\\"totalCount\\\": 7389, \\\"action\\\": \\\"truncate table\\\"}, {\\\"totalCount\\\": 3291, \\\"action\\\": \\\"defrag table\\\"}, {\\\"totalCount\\\": 2291, \\\"action\\\": \\\"sp_recompile table\\\"}, {\\\"totalCount\\\": 2172, \\\"action\\\": \\\"add range partitions\\\"}, {\\\"totalCount\\\": 2088, \\\"action\\\": \\\"update_statistics index\\\"}, {\\\"totalCount\\\": 2069, \\\"action\\\": \\\"drop range partitions\\\"}]\""

The above data is only available in the dashboard and not in the latest event data.
Hi, Just to confirm/enquire further on this: do you mean that we would be creating a service/script to run on the particular server? Or is there already a default Splunk config file with the settings for us to edit?
Hi, Thanks for the info. "There will be negative impacts on performance if the forwarder workload requires memory more than the specified limit." - This was also one of our concerns, as we do have some UFs that are configured to monitor quite a number of locations, e.g. more than 20. In your experience, how much memory might be used in this case? So far I have seen Splunk services using up to 4 GB on a Windows server and impacting other workloads, but the cause was that the Splunk UF installation had not been done properly, causing a memory leak.
With a dashboard you can set the refresh interval so that the values in the dashboard are refreshed as the search is re-run. But that should be used with caution, especially if there are many panels in your dashboard, the interval is short, and the searches are "heavy".
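In Simple XML the refresh can be set per search; a minimal sketch (the query and the 5-minute interval are arbitrary examples):

```
<search>
  <query>index=main | stats count</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <!-- re-run every 5 minutes; refreshType "delay" counts from when the previous run finished -->
  <refresh>5m</refresh>
  <refreshType>delay</refreshType>
</search>
```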
Hi @aditsss, let me understand: you want a table where each row contains only "ebnc event balanced successfully" and "True"? In this case, you can use:

index="600000304_d_gridgain_idx*" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
| eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","False")
| eval phrase="ReadFileImpl - ebnc event balanced successfully"
| table phrase keyword

Then, if you want, you could add other fields to the table command (e.g. _time). Ciao. Giuseppe
I just refreshed the data and the event has changed as below, but the table still has the old values. Other than a real-time search, is there a way to update the dashboard dynamically based on the event values in the JSON?

"groupByAction": "[{\"totalCount\": 41117, \"action\": \"update_statistics table\"}, {\"totalCount\": 33793, \"action\": \"reorg index\"}, {\"totalCount\": 22015, \"action\": \"job report\"}, {\"totalCount\": 10252, \"action\": \"reorg table\"}, {\"totalCount\": 8609, \"action\": \"truncate table\"}, {\"totalCount\": 3335, \"action\": \"defrag table\"}, {\"totalCount\": 2628, \"action\": \"add range partitions\"}, {\"totalCount\": 2522, \"action\": \"drop range partitions\"}, {\"totalCount\": 2465, \"action\": \"sp_recompile table\"}, {\"totalCount\": 2227, \"action\": \"update_statistics index\"}]"
@gcusello I don't want a count. I want it like this:

ebnc event balanced successfully          True
ebnc event balanced successfully          True
ebnc event balanced successfully          True

meaning whenever "ebnc event balanced successfully" occurs, the keyword TRUE should be there.
Hi @aditsss, once you've found the events, you can use the stats command:

index="600000304_d_gridgain_idx*" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","")
| stats count

But, as I said, if you put the keyword in the main search, all the results are true, so you need only the count of events, and you can simplify your search:

index="600000304_d_gridgain_idx*" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| stats count

It's different if you want to count the occurrences of the string and also the other events; then you have to modify the main search:

index="600000304_d_gridgain_idx*" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
| eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","False")
| stats count BY keyword

Ciao. Giuseppe
Hi @PaulaCom, did you already see these URLs on Lantern? https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Migrating_from_on-premises_to_Splunk_Cloud_Platform https://lantern.splunk.com/Splunk_Platform/Splunk_Cloud_Platform_Migration/Overview There's also this document, but it's very high level: https://www.splunk.com/en_us/customer-success/splunk-cloud-platform-migration.html Ciao. Giuseppe
I get that they want to distinguish between server and forwarder with different owners, but in my opinion it is way too late for that now. First, the scenario of having both server and forwarder together must be a tiny, tiny fraction of what is set up out there; I find it hard to see why you would do it at all. Second, after more than a decade with splunk as the owner, this is a change you should not take lightly if you have just a little respect for your customers. The impact on automation, security, and plain familiarity far outweighs the need to separate the server and forwarder. At the very least this should have been an option, not the default. Then the unprofessional way of doing this in a minor patch, without any warnings, just makes me furious. I get the same ticks as when they also changed from initd to systemd in a minor patch. I created a support case asking them to revert or come up with some options.
Morning All, I've been asked to document everything we have on the Splunk Platform (on-prem) before moving to the cloud. Has anyone been in a similar position, and where did they start? Any pointers would be appreciated. Thank you
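As one possible starting point, the REST endpoints can inventory much of an on-prem deployment from a search head (a sketch; these endpoints are standard Splunk REST, but adjust splunk_server to your topology and permissions):

```
| rest /services/apps/local splunk_server=*
| table splunk_server title version disabled

| rest /services/data/indexes splunk_server=*
| table splunk_server title currentDBSizeMB maxTotalDataSizeMB
```

Run each search separately; the first lists installed apps per instance, the second lists indexes and their sizes.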