Hi Splunkers! Good day! I would like to add the event and description fields to my stats command, but after adding them I'm not getting the expected results. I need those fields as well, while still getting the expected counts.

Old command:
| stats count as num by name country state scope

Modified command (giving me wrong results):
| stats count as num by name country state scope event description

Thanks in advance! Manoj Kumar S
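A likely cause, hedged since the underlying data isn't shown here: stats silently drops any event in which one of the by-fields is null, so adding event and description to the by clause removes every event that lacks those fields and the counts change. A minimal sketch of one workaround, assuming empty fields should simply be labelled:

| fillnull value="unknown" event description
| stats count as num by name country state scope event description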
Good morning community, I wanted to know whether it is possible to install Splunk on Solaris 11.4, and if so, whether you could provide me with the commands needed to perform the installation. I appreciate it (I'm a newbie).
UPDATE: I just edited the datamodel and included the related indexes. Waiting now for the datamodel to be recalculated. I will let you know if that solves the issue.
@VatsalJagani thank you for the advice. I'm not sure if my use of the stats command is correct. I need the appended output to show the search head captain, bundle size, and file name.

| rest /services/shcluster/status splunk_server=local
| fields captain.label
| append [
    | rest splunk_server=local /services/search/distributed/bundle-replication-files
    | fields captain.label size filename
    | eval timestamp=strftime(timestamp, "%m/%d/%y %H:%M:%S")
    | eval size=size/1024/1024/1024
    | table filename timestamp size ]
| stats latest(_time) as latest_time by captain.label size filename
| convert ctime(latest_time)
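An untested sketch of an alternative, in case the stats by clause is dropping rows where captain.label is missing: appendcols attaches the one-row captain result to the first row, and filldown copies it down to the rest (size_gb is a field name I made up here).

| rest splunk_server=local /services/search/distributed/bundle-replication-files
| eval timestamp=strftime(timestamp, "%m/%d/%y %H:%M:%S")
| eval size_gb=round(size/1024/1024/1024, 2)
| appendcols [| rest /services/shcluster/status splunk_server=local | fields captain.label]
| filldown captain.label
| table captain.label filename timestamp size_gb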
Thanks @_JP. I checked that link and what I saw is that if I search for eventtype=pan I get 0 results, but if I include index=* eventtype=pan then I get thousands of events. So I can imagine that the Palo Alto app does not include the indexes in its searches. Do you know if I should include the indexes for it to use somewhere in the local folder? Or maybe there is a setting in Splunk to use index=* by default in any search that does not include an index clause? Cheers
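Two common approaches, sketched under assumptions (the stanza name pan and the index name pan_logs are guesses; check the app's default/eventtypes.conf for the real ones). You can override the eventtype in the app's local folder so it carries its index:

# local/eventtypes.conf inside the Palo Alto app
[pan]
search = index=pan_logs sourcetype=pan*

Or widen the indexes your role searches by default, via authorize.conf:

# authorize.conf - indexes searched when no index clause is given
[role_your_role]
srchIndexesDefault = main;pan_logs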
Hi Team / @Ryan.Paredez, I have developed a sample .NET MSMQ sender and receiver standalone application and tried instrumenting it. I could load the profiler and was able to see the MQ details and transaction snapshots for the sender application, but I was unable to get MQ details for the receiver application in the AppDynamics controller, even though we are expecting an MSMQ entry point for the .NET consumer application. I tried resolving the metrics issue by adding the message queue entry points that AppDynamics describes at the link below: https://docs.appdynamics.com/appd/21.x/21.7/en/application-monitoring/configure-instrumentation/transaction-detection-rules/message-queue-entry-points Please look into this issue and help us resolve it. Thanks in advance.
Hi Gcusello, could you confirm that I can add a trendline in a table panel like this (I can't test it at the moment...)?

| timechart span=12h count, values(sourcetype)
| rename values(sourcetype) as sourcetype
| trendline sma10(count) as trend
| table _time count trend sourcetype

Thanks
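One detail worth knowing about that search: trendline's sma10 averages over 10 data points, so with span=12h the trend column stays empty for the first 9 rows, roughly the first four and a half days of the range. If that matters, a shorter window is one option (sma4 below is an arbitrary choice):

| timechart span=12h count, values(sourcetype) as sourcetype
| trendline sma4(count) as trend
| table _time count trend sourcetype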
I do agree with you, and that was the first practical choice I was thinking of. There is a risk I may be over-engineering an answer, but the goal would be to get a simple view of "where the gaps are". I find the more elaborate visualisations particularly helpful because they let me take dry technical topics and present them to non-technical people, who often control budgets, and because I can automate the creation of the data and feed it into existing automated workflows, alerting, ticketing and so on, which for me results in the approach running itself so I can move on to something else. This is really my goal in practice. Really appreciate the quick feedback - starting simple sounds very sensible!
It depends on what your goal is, because an app as such is actually a bunch of files stored together for:

1) Ease of management (so that you can easily deploy or remove them as a single app; also for proper configuration file precedence)
2) Permission management (so you can grant permissions for particular apps to specific roles)

But from your question I suppose you want to capture the state of the whole system in order to be able to recreate some specific reports or visualizations within a single app. It's not that easy.

1. Reports/visualizations from one app will most probably rely on stuff from other apps (extractions, calculated fields, lookups, maybe datamodels). So just one app is usually not enough on its own.
2. Of course, the report/dashboard is created from the data stored at a given point in time in your Splunk system, so you'd have to not only find out which data that is and properly copy it out (let's leave aside the technical details of copying it out for now), but also make sure that the data does not "age" and is not rotated to frozen and removed from the destination system over time.
3. Usually reports and dashboards use searches with time ranges defined relative to the current moment (like "from a week ago to this day's beginning"). If you execute such a search in two weeks' time, you will surely get different results, simply because the search will run against a completely different set of data even though you might still have the original data in place. A sketch of pinning the range follows below.

So it's more complicated than it seems. If you need to capture some state at this point, I'd rather think about exporting the results of some reports or screenshots of your dashboards - that's static data which is guaranteed not to change over time. Otherwise you'll have to solve all the problems I mentioned earlier.
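To illustrate point 3, a minimal sketch of pinning a search to absolute times so it keeps returning the same data (the index name and dates are placeholders):

index=demo earliest="01/01/2024:00:00:00" latest="01/08/2024:00:00:00"
| stats count by sourcetype

As long as the raw events stay in the index, this search is reproducible, whereas a relative range like earliest=-7d is not.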
Hi @harimadambi, if you want to copy data from one index to another, you can create a scheduled search (with the frequency you prefer for updates) using the collect command:

index=your_starting_index
| collect index=your_new_index

Sincerely, it isn't clear to me why you want to do this: I understand the app backup, but indexes are usually covered by a general backup policy and there's no requirement to back up only one index. But that's only my opinion; you can copy the events of one or more indexes into another one and use it for your purposes. Ciao. Giuseppe
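For reference, a hedged sketch of how such a scheduled search might be bounded (the marker string is made up; note also that collect rewrites events with sourcetype=stash by default, which keeps them from counting against license):

index=your_starting_index earliest=-15m@m latest=@m
| collect index=your_new_index marker="copied_by_scheduled_search"

Running it on a schedule with a snapped, bounded time window like this helps avoid copying the same events twice.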
OK. Even with version 7.x on the receivers (indexers/HFs), the 9.0 UF should be relatively OK. Just remember that the 9.0 UF introduced the configtracker mechanism, which produces events for the _configtracker index; that index is normally not available in older Splunk installations, so you'll end up with those events either discarded (with a warning) or sent to a last-chance index.
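A hedged sketch of the two ways to handle that on the receivers (check that your 7.x version actually supports lastChanceIndex before relying on it; defining the index is the safer bet):

# indexes.conf on the indexers - either define the missing index...
[_configtracker]
homePath   = $SPLUNK_DB/_configtracker/db
coldPath   = $SPLUNK_DB/_configtracker/colddb
thawedPath = $SPLUNK_DB/_configtracker/thaweddb

# ...or route events aimed at undefined indexes somewhere visible
[default]
lastChanceIndex = main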
It depends on what you are trying to visualise in your data. What insights would a chart help you find? For example, a simple column chart with the x-axis as your number and the y-axis as used/unused might be sufficient. Depending on whether you are more interested in used or unused, you might assign the values the other way around. Also, if you mostly have used numbers and are looking for the occasional gap, you might want unused to show up as a spike rather than a dip, as in the sketch below.
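A minimal sketch of that inversion (the used field and its values are assumptions about your data):

| eval gap=if(used="yes", 0, 1)
| table number gap

Plotted as a column chart over number, each unused entry then stands out as a column of height 1 instead of a dip.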
@gcusello Luckily, Splunk is quite resourceful and can optimize some searches on its own. For example, take this search from my local Splunk at home:

index=_internal host=backup1.local
| search source="/var/log/audit/audit.log"

If you go to the job details dashboard, you will see that the chained searches have been merged into a single search which is performed in the map phase (normally it would be pushed to the indexers, but my environment is all-in-one in this case). I wouldn't normally rely on Splunk's ability and would try to make the search "good" anyway, but it's worth knowing that chaining searches does not necessarily hurt performance on its own. Of course, if you do something in between, like

| search | calculate_some_fields | search from_those_fields

it won't be optimized out, because you still have to calculate those fields first, so YMMV. So it's not that easy.
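A concrete sketch of that second pattern (index and field names are made up): the eval in the middle forces the final search to run after field calculation, so it cannot be folded back into the initial data scan.

index=web
| search status>=500
| eval is_server_error=if(status>=500, 1, 0)
| search is_server_error=1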
@gcusello Thank you for the answer. I'm managing a multisite indexer cluster which holds data for many customer projects and their visualizations. I would like to create a snapshot of a particular project for a specific date in the past, to be kept for the future. The reason I call it a snapshot is that I need the same visualizations along with the full data in another index. So my idea was to create another index and copy the data from the source index, say demo, to a destination index, say demo_snapshot, with the collect command. But there I can see some data loss. So I would like to know whether there are any suggestions from the Splunk community on how to achieve my target. Thank you,
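To quantify that loss, one quick check (using the index names from your example) is to run this over the same fixed time range and compare the counts per index:

| tstats count where index=demo OR index=demo_snapshot by index

Adding _time to the by clause, e.g. "by index _time span=1d", narrows down which days lost events. One hedged caveat, since your setup isn't visible here: collect rewrites events with sourcetype=stash by default, so any visualization that filters on the original sourcetype will also come up empty against the snapshot even when no events were lost.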
There are a number of viz apps that may help. The Itsy Bitsy App for Splunk - https://splunkbase.splunk.com/app/5256 - is an example app that shows some interesting Simple XML/CSS techniques. Also, not sure if Treemap would work for your use case: https://splunkbase.splunk.com/app/3118