All Posts

Hi @jip31, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @jip31, yes, for more info see https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/SearchReference/Trendline Ciao. Giuseppe
Hi Team / @Ryan.Paredez, I have developed a sample .NET MSMQ sender and receiver standalone application and have tried instrumenting it. I could load the profiler and was able to see the MQ details and transaction snapshots for the sender application, but I was unable to get MQ details for the receiver application in the AppDynamics Controller, even though we are expecting an MSMQ entry point for the .NET consumer application. I have tried resolving the issue by adding the message queue entry points that AppDynamics describes at the link below: https://docs.appdynamics.com/appd/21.x/21.7/en/application-monitoring/configure-instrumentation/transaction-detection-rules/message-queue-entry-points Please look into this issue and help us resolve it. Thanks in advance.
Hi Gcusello, so could you confirm that I can add a trendline in a table panel like this (I can't test it for the moment...)? | timechart span=12h count, values(sourcetype) | rename values(sourcetype) as sourcetype | trendline sma10(count) as trend | table _time count trend sourcetype Thanks
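For anyone unsure what `sma10` produces in the search above, here is a minimal Python sketch of a 10-point simple moving average; the bucket counts are made up for illustration, not taken from the thread:

```python
def sma(values, window):
    """Simple moving average; None until the window has filled."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)  # mirrors trendline's null leading values
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

# Hypothetical 12-hour bucket counts, as a timechart might produce
counts = [5, 7, 6, 9, 8, 10, 12, 11, 9, 13, 14, 12]
trend = sma(counts, 10)  # trend[9] onward holds the 10-point average
```

This also explains why the `trend` column in the table stays empty for the first rows: the average needs 10 data points before it can emit a value.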
wow - I like that - looks perfect thank you for the ideas 
I do agree with you, and that was the first practical choice I was thinking of. There is a risk I may be over-engineering an answer, but the goal would be to get a simple view of "where the gaps are". I find the more elaborate visualisations particularly helpful because they let me take dry technical topics and present them to non-technical people, who often control budgets, and I can automate the creation of the data and feed it into existing automated workflows, alerting, ticketing and so on, which means the approach runs itself and I can move on to something else. That is really my goal in practice. Really appreciate the quick feedback - starting simple sounds very sensible!
It depends on what your goal is, because an app as such is really just a bunch of files stored together for: 1) Ease of management (so that you can easily deploy or remove them as a single app; also for proper configuration file precedence) 2) Permission management (so you can grant permissions for particular apps to specific roles) But from your question I suppose you want to capture the state of the whole system in order to be able to recreate some specific reports or visualizations within a single app. It's not that easy.
1. Reports/visualizations from one app will most probably rely on stuff from other apps (extractions, calculated fields, lookups, maybe datamodels). So just one app is usually not enough on its own.
2. Of course the report/dashboard is created from the data stored at a given point in time in your Splunk system, so you'd have to not only find out which data that is and properly copy it out (let's leave aside the technical details of copying it out for now), but also make sure that on the destination system the data does not "age", get rotated to frozen and be removed over time.
3. Usually reports and dashboards use searches with time ranges defined relative to the current moment (like "from a week ago to the beginning of today"). If you run such a search in two weeks' time you will surely get different results, simply because the search will run against a completely different set of data, even though you might still have the original data in place.
So it's more complicated than it seems. If you need to capture some state at this point, I'd rather think about exporting the results of some reports, or screenshots of your dashboards - that's static data which is guaranteed not to change over time. Otherwise you'll have to solve all the problems I mentioned earlier.
| regex user="^[A-L]"
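In case the character class is unclear, here is a small Python illustration of the same pattern (the user names are made up); note that `^[A-L]` anchors to the start of the value and is case-sensitive:

```python
import re

users = ["Alice", "Bob", "mike", "Laura", "Zoe"]

# Keep only users whose name starts with an uppercase A through L,
# analogous to what `| regex user="^[A-L]"` keeps in Splunk.
kept = [u for u in users if re.search(r"^[A-L]", u)]
# "mike" is dropped because the class is case-sensitive;
# "Zoe" is dropped because Z falls outside A-L.
```

If you need a case-insensitive match, `^[A-La-l]` (or the `(?i)` flag) would be the equivalent adjustment.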
Hi @harimadambi, if you want to copy data from one index to another, you can create a scheduled search (with whatever update frequency you prefer) using the collect command: index=your_starting_index | collect index=your_new_index Honestly, it isn't clear to me why you want to do this: I understand backing up the app, but indexes are usually covered by a general backup policy and there's rarely a requirement to back up a single index. But that's just my opinion; you can copy the events of one or more indexes into another one and use it for your purposes. Ciao. Giuseppe
OK. Even with version 7.x of the receivers (indexers/HFs), the 9.0 UF should be relatively OK. Just remember that 9.0 UF introduced the configtracker mechanism, which produces events for the _configtracker index. That index is normally not available in older Splunk installations, so those events will either be discarded (with a warning) or sent to the last-chance index.
It depends on what you are trying to visualise in your data. What insights would a chart help you find? For example, a simple column chart with the x-axis as your number and the y-axis as used/unused might be sufficient. Depending on whether you are more interested in used or unused, you might assign the values the other way around. Also, if the numbers are mostly used and you are looking for the occasional gap, you might want unused to show up as a spike rather than a dip.
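As an illustration of that encoding (the "used" set below is made up, not from the thread): flipping the flag so that unused numbers carry the value 1 makes the gaps show up as spikes in a column chart:

```python
# Hypothetical set of "used" numbers within a 1..10 range
used = {3, 4, 7, 10}

# Encode unused as 1 so gaps appear as spikes rather than dips
rows = [(n, 0 if n in used else 1) for n in range(1, 11)]

# The gaps are then simply the rows where the flag is 1
gaps = [n for n, flag in rows if flag == 1]
```

The same (number, flag) pairs could be produced by the poster's data-generation script and charted directly.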
@gcusello Luckily, Splunk is quite resourceful and can optimize some searches on its own. For example, take this search from my local Splunk at home: index=_internal host=backup1.local | search source="/var/log/audit/audit.log" If you look at the job details dashboard, you can see that the chained searches have been merged into a single search which is performed in the map phase (normally it would be pushed to the indexers, but my environment is all-in-one in this case). I wouldn't normally rely on Splunk's optimizer and would try to make the search "good" anyway, but it's worth knowing that chaining searches does not necessarily hurt performance on its own. Of course, if you do something in between, like | search | calculate_some_fields | search from_those_fields it won't be optimized out, because you still have to calculate those fields first, so YMMV. So it's not that easy.
@gcusello Thank you for the answer. I'm managing a multisite indexer cluster which holds many customer projects' data and their visualizations. I would like to create a snapshot of a particular project for a specific date in the past, and it should be kept for the future. The reason I call it a snapshot is that I need the same visualizations along with the full data in another index. So my idea was to create another index and copy the data from the source index (say, demo) to a destination index (say, demo_snapshot) with the collect command. But there I can see some data loss. So I would like to know whether the Splunk community has any suggestions on how to achieve this. Thank you,
There are a number of viz that may help. The Itsy Bitsy App for Splunk - https://splunkbase.splunk.com/app/5256 - is an example app that shows some interesting simple XML/CSS techniques. Also, not sure if Treemap would work for your use case: https://splunkbase.splunk.com/app/3118
Hi Muhammed, Can you check the db.log and server.log files and upload them here? Can you also share the DB, Controller and platform versions?
Hi @harimadambi, if an app was correctly created without private objects, it can easily be backed up by taking the files from $SPLUNK_HOME/etc/apps/your_app. Data is a little more difficult, because you have to know which indexes are used by the app and then back them up. You can find each index in: $SPLUNK_DB/your_index/db for hot and warm data, $SPLUNK_DB/your_index/colddb for cold data. SPLUNK_DB is $SPLUNK_HOME/var/lib/splunk by default, but it may be different in your installation. Ciao. Giuseppe
Hi @Dhivakarpn... as said in the previous reply, most Linux distributions should be OK for installing the UF (unless the distribution is more than 10 years old). One other concern is your indexer version: you should make sure the UF is compatible with the indexer. https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/Compatibilitybetweenforwardersandindexers
Hi, basically this means that your daily indexing volume is greater than your license allows. What happens next depends entirely on your Splunk version and the size of your license. With the free version, after you have accumulated 5 violations in 30 days, you cannot run searches until there is a 30-day period with fewer than 5 violations. For Enterprise with a license under 100GB, in recent versions that limit is 45 violations in 60 days: if you exceed it your searches will be blocked, but you can ask your Splunk account manager to reset the license. If your license is 100GB+, you will get the warnings but your searches keep working. Some older versions with a paid license used the 5-in-30-days limit, after which you could ask for a license reset, unless you had a "non-blocking" license, in which case it acted like the current 100GB+ license. r. Ismo
Is it possible to create a backup of the app, with its data and visualizations, for a specific date, to keep for a future date?
Hi Forum, I want to chart a list - say, for example, {1..100} - and represent in a mosaic-type visual presentation whether each number has been used or not. So I would probably look to introduce a second dimension: 1 = used, 0 = unused. Punch card looks interesting - has anyone done anything similar, maybe with IP addressing or something else? My use case is charting LDAP attributes (I generate the data with a script, so I can control the shape of it). I want to get everyone away from spreadsheets....