All Topics
Hi All, I've created the logic below to decode base64. Other discussions on this topic offer possible solutions, but those only work when the encoded value is small, because they rely on list() in their stats command. My logic looks like this:

| eval time=_time
| appendpipe
    [ | eval converts=split(encoded,"")
      | mvexpand converts
      | lookup base64conversion.csv index as converts OUTPUT value as base64bin
      | table encoded, base64bin, time
      | mvcombine base64bin
      | eval combined=mvjoin(base64bin,"")
      | rex field=combined "(?<asciibin>.{8})" max_match=0
      | mvexpand asciibin
      | lookup base64conversion.csv index as asciibin OUTPUT value as outputs
      | table encoded, outputs, time
      | mvcombine outputs
      | eval decoded=mvjoin(outputs,"")
      | table encoded, decoded, time ]
| selfjoin time

In a test environment this works as expected. It is partially based on other people's work, so some of it may look familiar from other discussions. My issue is that when it is put into a larger search it doesn't work for all values, especially the longer ones. I can't show it in action, unfortunately, but if you run it against several encoded commands it will only decode one of them. I thought this might be because the selfjoin on time is not entirely unique, but I'm starting to think it's because I'm not using a stats command before the appendpipe to group by encoded; even when I do that, though, it still doesn't work. The lookup I'm using is based on the one discussed here: https://community.splunk.com/t5/Splunk-Search/base64-decoding-in-search/m-p/27572 At this point I will likely just install an app if no one can resolve this, but I thought I'd ask for other people's points of view. Any help would be much appreciated.
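A possible adjustment, sketched under the assumption that the problem really is the join key: selfjoin time collapses any two events that share the same _time, so giving every event a guaranteed-unique row number with streamstats and joining on that instead may let each encoded value decode independently. The row field below is introduced purely for illustration.

| eval time=_time
| streamstats count AS row
| appendpipe
    [ | eval converts=split(encoded,"")
      | mvexpand converts
      | lookup base64conversion.csv index as converts OUTPUT value as base64bin
      | table encoded, base64bin, row, time
      | mvcombine base64bin
      | eval combined=mvjoin(base64bin,"")
      | rex field=combined "(?<asciibin>.{8})" max_match=0
      | mvexpand asciibin
      | lookup base64conversion.csv index as asciibin OUTPUT value as outputs
      | table encoded, outputs, row, time
      | mvcombine outputs
      | eval decoded=mvjoin(outputs,"")
      | table encoded, decoded, row, time ]
| selfjoin row

Carrying row through each table command also gives mvcombine a unique grouping key, so two events with identical encoded strings or timestamps no longer merge into one result.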
I am trying to set up a POC for Splunk indexing. The manager node is up, but it runs over HTTP (there is no certificate yet) instead of HTTPS. While configuring the peer, when I provide the address of the manager node I get an error. Is there a way to bypass this, or to create a dummy certificate for Splunk?
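A possible way forward, assuming the error comes from pointing the peer at an http:// address: splunkd's management port (8089) serves HTTPS out of the box using Splunk's bundled self-signed certificates, so for a POC no extra certificate work is normally needed; just reference the manager over https. The host name, secret, and paths below are placeholders:

# on the peer, pointing at the manager node over HTTPS
/opt/splunk/bin/splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -replication_port 9887 -secret <cluster_secret>
/opt/splunk/bin/splunk restart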
I am a learner who wants to create a Splunk server where I can retrieve information from a business server at the firm where I work. To protect the firm securely, I wonder if I can connect to a server that all of the PCs are connected to and retrieve the information going out and coming in. That way I can supervise the traffic and see if anything is out of the ordinary. Any help is appreciated :).
The Splunkd logs are sending me the messages listed below. The alerts reappear a few days after Splunkd restarts. However, I've since made some adjustments to indexes.conf and added two attributes:

maxHotBuckets = 5
minHotIdleSecsBeforeForceRoll = auto

Please advise if both settings are sufficient to permanently remove these informational messages.

11-04-2023 15:40:09.545 +0100 INFO HotBucketRoller - finished moving hot to warm bid=asr~308~34353497-7F2F-41CB-B772-DAF7007EA623 idx=abs from=hot_v1_308 to=db_1698249739_1698190953_308 size=786313216 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
11-03-2023 22:07:29.511 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~379~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_379 to=db_1698211695_1698040811_379 size=1048535040 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
11-01-2023 07:31:25.596 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_audit~69~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_audit from=hot_v1_69 to=db_1696240764_1695536757_69 size=786419712 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-31-2023 19:58:48.033 +0100 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~140~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_140 to=db_1696974841_1696841261_140 size=786358272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-31-2023 17:23:48.700 +0100 INFO HotBucketRoller - finished moving hot to warm bid=asr~303~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_303 to=db_1697800494_1697727845_303 size=785281024 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-29-2023 00:03:30.635 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~376~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_376 to=db_1673823600_1673823600_376 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-27-2023 12:24:16.567 +0200 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~138~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_138 to=db_1696587710_1696461161_138 size=786423808 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-25-2023 07:28:42.146 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~374~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_374 to=db_1697476202_1697263512_374 size=1048510464 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-24-2023 06:36:55.716 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~293~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_293 to=db_1697038969_1696983723_293 size=786386944 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-20-2023 13:15:13.165 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~286~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_286 to=db_1696492029_1696421708_286 size=785948672 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-17-2023 08:50:44.494 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~373~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_373 to=db_1697263511_1697083171_373 size=1048502272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-16-2023 19:10:28.534 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~372~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_372 to=db_1697083169_1696908238_372 size=1048461312 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-15-2023 18:10:43.940 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_introspection~230~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_introspection from=hot_v1_230 to=db_1683689783_1619379864_230 size=413696 caller=lru maxHotBuckets=3, count=3 hot buckets + 1 quar bucket,evicting_count=1 LRU hots
10-14-2023 21:26:48.653 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_audit~67~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_audit from=hot_v1_67 to=db_1694945963_1694438187_67 size=786403328 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-14-2023 08:06:09.886 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~369~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_369 to=db_1696504588_1696317607_369 size=1047363584 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-14-2023 05:02:31.677 +0200 INFO HotBucketRoller - finished moving hot to warm bid=wmc~44~34353497-7F2F-41CB-B772-DAF7007EA623 idx=www from=hot_v1_44 to=db_1695949104_1695348831_44 size=786358272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-12-2023 05:59:51.941 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~367~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_367 to=db_1696102911_1695901400_367 size=1048420352 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-11-2023 17:43:09.179 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~284~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_284 to=db_1696364124_1696299722_284 size=786280448 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-10-2023 23:54:56.050 +0200 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~135~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_135 to=db_1696039435_1695914107_135 size=786350080 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
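For reference, a minimal sketch of where those attributes live, assuming they should apply cluster-wide via the [default] stanza (they can equally go into individual index stanzas). Note the messages above still show maxHotBuckets=3, so entries logged before the change simply reflect the old limit:

# indexes.conf
[default]
maxHotBuckets = 5
minHotIdleSecsBeforeForceRoll = auto

These HotBucketRoller entries are INFO-level housekeeping messages about normal hot-to-warm rolling; caller=lru means the hot-bucket limit was hit, so raising maxHotBuckets should reduce how often that path triggers, but some rolling is always expected.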
Client is asking about the Splunk Cloud backup and recovery procedure for DR, specifically all the configuration, searches, dashboards, fields, tags, and so on. I cannot find a document outlining Splunk Cloud policies for high availability, backup, and restore. Can anyone point me to this info?

Client ask: "Could you please check and let me know how and where the following items are backed up, and what the process is to recover them for DR purposes?
- Audit logs
- Usecases
- Reports, alerts, lookup tables, KV etc
- Config data
- Source type config
- Parsing
- API, TI
- Fields config
- Data model, macros
- Apps and app config
- ES config
- Threat intel config"
I have some data where I want to write the values of test_n (n in 1,2,...,20) into a multivalue field and keep the numeric order. My attempt is to create the field names in a subsearch and pass them to mvappend(). This does not work.

| makeresults count=20
| streamstats count
| eval test_{count}=count
| stats first(test*) AS test*
| eval x=mvappend(
    [| makeresults count=20
     | streamstats count AS count
     | eval field_names="test".count
     | stats list(field_names) AS field_names
     | nomv field_names
     | eval field_names=replace(field_names," ",", ")
     | return $field_names])

Is there any alternative to spelling out

| eval x=mvappend(test_1,...,test_20)

by hand?
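One alternative, sketched here as a possibility rather than a guaranteed answer: the foreach command expands a wildcard over matching field names at search time, which avoids both the subsearch and the hand-written list. One caveat: foreach walks matching fields in lexicographic order (test_1, test_10, test_11, ..., test_2, ...), so zero-padded names such as test_01 would be needed to strictly preserve numeric order.

| makeresults count=20
| streamstats count
| eval test_{count}=count
| stats first(test*) AS test*
| foreach test_* [ eval x=mvappend(x, '<<FIELD>>') ]

mvappend ignores the null x on the first iteration, so no seeding of x is required.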
Hi, we have 4 mission-critical MQ servers on which the number of queues that need to be monitored by the MQ Extension has more than doubled. This means the currently configured metrics limit of 3000 is insufficient. We have added additional resources to all 4 servers (i.e. CPU and memory) and want to increase the agent metrics limit to around 8-10k. Q: What increase in agent memory do we need to safely handle this increase with at least 20-30% buffer headroom? Thanks
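A sketch of the relevant knob, assuming the MQ Extension runs under the stand-alone Machine Agent: the metric limit is raised with the appdynamics.agent.maxMetrics system property in the agent's startup arguments. The heap figures below are illustrative starting points rather than a published sizing formula; the safest approach is to raise -Xmx incrementally in a test environment and watch heap usage under the full queue load until you have your 20-30% headroom.

# Machine Agent startup (service wrapper or start script); values are examples
java -Dappdynamics.agent.maxMetrics=10000 -Xms1g -Xmx2g -jar machineagent.jar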
I have a list of regions in one input.dropdown. Based on the region selection, I need to populate the servers in another input.dropdown in the same glass table, using search-based inputs on both dropdowns.
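A minimal sketch of the two search-based inputs, assuming a hypothetical lookup servers.csv with region and server columns and a token named $region$ on the first dropdown (all names are illustrative):

Region dropdown search:
| inputlookup servers.csv
| dedup region
| table region

Server dropdown search (re-runs whenever $region$ changes):
| inputlookup servers.csv
| search region="$region$"
| dedup server
| table server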
Hi, We need to upgrade our Splunk Enterprise from version 9.0.0 to 9.0.7 on the Deployment Server. Can someone please provide me with the steps required to perform this upgrade? I also need guidance on what needs to be backed up before executing it, and an estimate of the time required to complete the upgrade.
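For a Linux tarball install, the usual sequence looks roughly like the sketch below (paths and the package file name are placeholders; the <build> hash comes from the download page). Backing up $SPLUNK_HOME/etc covers the deployment server's serverclass.conf and deployment-apps directory. On a deployment server with no indexed data to migrate, the upgrade itself typically takes minutes, though you should budget extra time for validation:

/opt/splunk/bin/splunk stop
tar -czf /backup/splunk-etc-$(date +%F).tar.gz /opt/splunk/etc
tar -xzf splunk-9.0.7-<build>-Linux-x86_64.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license --answer-yes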
Hi Team, I want to create a Splunk dashboard with the average response time taken by all the APIs which follow this condition. Example: I have the APIs below:

/api/cvraman/book
/api/apj/book
/api/nehru/book
/api/cvraman/collections
/api/apj/collections
/api/indira/collections
/api/rahul/notes
/api/rajiv/notes
/api/modi/notes

Now I want to check the average for the API patterns /api/*/book, /api/*/collections, and /api/*/notes, and the dashboard chart should show only those three response times. I tried the query below, but the dashboard shows the combined average over all three. Can someone please help with this?

index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| stats avg(duration) as avg_time
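One way to get a separate average per pattern, sketched assuming duration holds the response time: normalize each URI down to its pattern with replace(), then group the stats by that field (api_group is a name introduced here for illustration).

index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| eval api_group=replace(URI, "^/api/[^/]+/", "/api/*/")
| stats avg(duration) AS avg_time BY api_group

For a chart over time, swapping the final line for | timechart avg(duration) BY api_group follows the same idea.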
Hi Splunkers, this problem is occurring on the Splunk_TA_paloalto app panels. Does anyone know how to handle it? I understand it has no effect on any search, but it's still annoying. Thanks in advance.
Hi, I need to write a query to find the time remaining to consume events. I have three separate searches:

index=x message.message="Response sent" message.feedId="v1" | stats count as Produced

index=y | spath RenderedMessage | search RenderedMessage="*/v1/xyz*StatusCode*2*" | stats count as Processed

index=z message.feedId="v1" | stats avg("message.durationMs") as AverageResponseTime

So I basically want to compute: Average Time Left = (Produced - Processed) / AverageResponseTime. How can I go about doing this? Thank you so much
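One way to combine three independent searches, sketched with the union command (append would work similarly): each subsearch reduces to a single-field row, a final stats pulls the three values onto one row, and eval applies the formula, with the grouping assumed as written above.

| union
    [ search index=x message.message="Response sent" message.feedId="v1"
      | stats count AS Produced ]
    [ search index=y
      | spath RenderedMessage
      | search RenderedMessage="*/v1/xyz*StatusCode*2*"
      | stats count AS Processed ]
    [ search index=z message.feedId="v1"
      | stats avg("message.durationMs") AS AverageResponseTime ]
| stats first(Produced) AS Produced first(Processed) AS Processed first(AverageResponseTime) AS AverageResponseTime
| eval TimeLeft=(Produced-Processed)/AverageResponseTime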
I have a single search head and configured props.conf with DATETIME_CONFIG = CURRENT, as I want the data to be indexed at the time Splunk receives the report. I restarted Splunk after every change. Previously I had it set to a field in the report. When I upload a CSV and use the correct sourcetype, it assigns the current time to the report. When I upload a report via curl through the HEC endpoint, it indexes it at the right time, and the same happens when I run it through a simple script. But when the test pipeline runs, it indexes data at the timestamp that is in the report, even though it is using the same sourcetype as in my other tests. Is it possible to add a time field that overrides the sourcetype config? Is there a way to see the actual API request in the Splunk internal logs?
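On the first question: yes. Events sent to HEC's JSON event endpoint can carry an explicit top-level time field, and that value is applied directly as the event timestamp, bypassing sourcetype timestamp settings such as DATETIME_CONFIG = CURRENT; a pipeline that populates that field would explain exactly the behavior described. (Also worth checking: props.conf timestamp settings take effect where the data is parsed, i.e. the indexer or heavy forwarder hosting the HEC input, not on a search head.) A sketch of the difference, with token and host as placeholders:

# explicit time field: indexed with that timestamp regardless of DATETIME_CONFIG
curl -k https://splunk.example.com:8088/services/collector/event -H "Authorization: Splunk <hec_token>" -d '{"time": 1672531200, "sourcetype": "my_sourcetype", "event": "hello"}'

# no time field: timestamping falls back to the sourcetype/props behavior
curl -k https://splunk.example.com:8088/services/collector/event -H "Authorization: Splunk <hec_token>" -d '{"sourcetype": "my_sourcetype", "event": "hello"}'

On the second question, HEC activity is logged internally; a search along the lines of index=_internal source=*http_event_collector_metrics.log* is one place to look, though the exact source name can vary by version.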
I have a drilldown into another dashboard with parameters earliest=$earliest$ and latest=$latest$, and this works. But when I go into the drilldown dashboard directly, it sets the time range to "all time". Is there a way to have multiple defaults, or some other constraint that avoids this? Here's what I've been working on, but it's not working. Any feedback would be helpful...

<input type="time" token="t_time">
  <default>
    <earliest>if(isnull($url.earliest$), "-15m@m", $url.earliest$)</earliest>
    <latest>if(isnull($url.latest$), "now", $url.latest$)</latest>
  </default>
</input>
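The usual pattern, sketched under the assumption that the target is a Simple XML dashboard: eval-style expressions such as if(isnull(...)) are not evaluated inside <default>, so instead give the time input a plain default and have the calling dashboard set the input's token directly through form.* URL parameters, which override the default only when present (app and dashboard names below are placeholders):

<!-- drilldown dashboard: plain default, used when opened directly -->
<input type="time" token="t_time">
  <default>
    <earliest>-15m@m</earliest>
    <latest>now</latest>
  </default>
</input>

<!-- calling dashboard: drilldown link populates the input's token -->
<link>/app/my_app/my_drilldown?form.t_time.earliest=$earliest$&amp;form.t_time.latest=$latest$</link>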
Good day. I am somewhat new to Splunk. I am trying to cross-reference some malicious IPs I have in a .csv file against the src_ip field, and return the result when there are matches. I understand that I have to create a lookup, but I can't get any further.
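A minimal sketch, assuming the CSV has been uploaded as a lookup table file (Settings > Lookups > Lookup table files) with a column named ip; the file and column names are placeholders for whatever the real file uses:

index=your_index src_ip=*
| lookup malicious_ips.csv ip AS src_ip OUTPUT ip AS matched_ip
| where isnotnull(matched_ip)

An equivalent subsearch form filters the base search directly: index=your_index [ | inputlookup malicious_ips.csv | rename ip AS src_ip | fields src_ip ].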
We are happy to announce the 20 lucky questers who have been selected as the first round of Champion's Tribute winners in the Great Resilience Quest! These skilled individuals were featured on our leaderboard from July 17th to October 20th, mastering challenges across the Security Saga and the Observability Chronicle.

Congratulations to Our Winners! Each of you will receive a $100 Splunk Store gift card as a token of our appreciation for your dedication and resilience. You will be contacted via the email you used to register for the quest. If you haven't heard from us by next week, please reach out to me via a community message.

Next Announcement: Get Ready! The next round of the Champion's Tribute will cover the period from October 21st to December 9th. There is still time to climb the ranks and make your mark on the leaderboard. Who will be the next set of winners? It could be you! Stay tuned for more updates and keep striving for greater digital resilience!

Best regards, Customer Success Marketing
Hello, I have a dashboard with a few panels. All the panels have drilldown tokens that update a table. I also have a few filters that are supposed to update the table, but I see the message "Search is waiting for input..." when selecting options from the filters. The table updates only when clicking on the other panels. How can I update the table from the filters? Thanks
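One common cause, offered as an assumption since the dashboard source isn't shown: the table's search references tokens that only the panel drilldowns set, so until a drilldown fires those tokens are undefined and the search waits for input. Giving the drilldown tokens default values in a Simple XML <init> block lets the filter inputs drive the table immediately (token names below are placeholders):

<form>
  <init>
    <set token="selected_host">*</set>
    <set token="selected_status">*</set>
  </init>
  <!-- rest of the dashboard unchanged -->
</form>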
How can you leverage a monitoring-as-code mechanism to initiate new workload monitoring, or to create new visualizations?

Video Length: 3 min 33 seconds

In this demo, see how Cisco AppDynamics can integrate with Flux CD (Continuous Delivery), a GitOps Kubernetes operator tool that offers a simple and efficient interface for synchronizing manifests within CD workflows from GitHub repositories. See how easy it is to upgrade existing software with just a few lines of code, such as when instrumenting new workloads with the OpenTelemetry Agent or customizing a Grafana dashboard.

Additional Resources: Learn more about OpenTelemetry auto-instrumentation in the documentation.

About the presenter: Charles Lin, Cisco AppDynamics Field Domain Architect. Charles is a Field Domain Architect at Cisco AppDynamics. He joined Cisco as a Senior Sales Engineer in 2019. Since then, he has helped large enterprises and financial-sector customers improve their monitoring practices. As a Field Domain Architect, he focuses on cloud-native and OpenTelemetry best practices and on helping fellow team members overcome technical challenges. He holds multiple patents in the area of IT monitoring and operations and is a certified Cisco DevNet Associate.
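The demo itself is in the video, but as a rough illustration of the kind of Git-managed change involved (assuming the OpenTelemetry Operator's auto-instrumentation pattern, with all names hypothetical), committing an annotation like this to a repository is the sort of edit Flux CD would reconcile into the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-workload
  template:
    metadata:
      labels:
        app: my-workload
      annotations:
        # asks the OpenTelemetry Operator to inject the Java auto-instrumentation agent
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
        - name: app
          image: example.com/my-app:1.0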
We are planning a migration of hot drives to faster disk for all indexers. The current plan is below, and I wonder if it makes sense, because it applies maintenance mode for each indexer individually, and I grew up with using maintenance mode for the entire operation. Any thoughts?

The current plan:

Event monitoring
- Suspend monitoring of alerts on the indexers

(Repeat the following for each indexer, one at a time)

Splunk Ops:
- Put the Splunk cluster in maintenance mode
- Stop the Splunk service on one indexer

VM Ops:
- vMotion the existing 2.5TB disk to any Unity datastores
- Provision a new 2.5TB VM disk from the VSAN datastore

Linux Ops:
- Rename the existing hot data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
- Create a new volume group and mount the new 2.5TB disk as "/opt/splunk/hot-data"

Splunk Ops:
- Restart the Splunk service on the indexer
- Take the indexer cluster out of maintenance mode
- Review the cluster manager to confirm the indexer is processing and rebalancing has started as expected
- Wait a few minutes to allow Splunk to rebalance across all indexers

(Return to top and repeat the steps for the next indexer)

Splunk Ops:
- Validate service and perform test searches
- Check the CM panel -> Resources/usage/machine (bottom panel, IOWait Times) and monitor changes in IOWait

Event monitoring
- Enable monitoring of alerts on the indexers

In addition, Splunk PS suggested using:

splunk offline --enforce-counts

I'm not sure if that's the right way, since it might need to migrate the ~40TB of cold data and would slow the entire operation.
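A sketch of the Splunk commands the per-indexer cycle implies, assuming a tarball install (host roles noted in comments). On the --enforce-counts question: plain splunk offline takes the peer down gracefully without waiting for replication counts to be re-met, whereas --enforce-counts makes the cluster fully satisfy the replication and search factors first, which, as you suspect, can mean moving a great deal of data and is usually reserved for permanently decommissioning a peer.

# on the cluster manager
/opt/splunk/bin/splunk enable maintenance-mode --answer-yes

# on the indexer being migrated (gentler than a plain 'splunk stop')
/opt/splunk/bin/splunk offline

# ... disk migration steps from the plan ...

# on the indexer, after the new volume is mounted
/opt/splunk/bin/splunk start

# on the cluster manager, once the peer has rejoined
/opt/splunk/bin/splunk disable maintenance-mode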
Are you feeling it? All the career-boosting benefits of up-skilling with Splunk? It's not just a feeling, it's a fact, according to insights from the 2023 Splunk Career Impact Survey. All your hard work taking Splunk Education courses and getting Splunk Certified is helping you weather the tough economy and increase your career resilience.

Mastering Splunk
The report was produced from a survey of 749 Splunk practitioners across the community that asked questions about their earning power, promotability, and proficiency. The survey results confirmed the potential benefits to employee and employer alike, highlighting that mastering Splunk is one of the best ways to fortify both enterprise and career resilience.

Future-Proofing Your Career
According to the survey results, very proficient practitioners of Splunk are 2.7 times more likely to get promoted, and those with Splunk certifications plus higher levels of Splunk proficiency reported earning approximately 131% more than their less-proficient peers. Over 86% believe their company is in a stronger competitive position because of Splunk. With your Splunk skills, you're on the way to future-proofing your career!

Executive Perspective
Eric Fusilero, the VP of Global Enablement and Education at Splunk, recently shared his excitement about the results and his perspective on what this means in an industry struggling to fill IT and cybersecurity roles.

"[Splunk] is incredibly powerful – and yet we all know that no matter how amazing any piece of software or new technology is, it is really only as powerful as the people who use it. It makes me feel good to know that Splunk Education is a critical piece when it comes to harnessing the true power of Splunk and the impact Splunk Training and Certification has on the careers of those who use it and thrive with it."

Don't Let Up on the Gas
Fortify your career resilience by digging even deeper into Cloud, Security, and Observability and validating that knowledge with industry-recognized certification badges. Keep going with everything Splunk Education has to offer.

Happy learning.
Callie Skokos, on behalf of the entire Splunk Education Crew