All Posts


I am trying to extract the eligible field where the value is true. When I do it in the regex builder, this works: eligible\\":(?<eligibility_status>[^,]+) but when I do it in Splunk, adding the additional backslash to escape the quotation mark, the query runs but the field is not there.

Sample event:

Name":null,"Id":null,"WaypointId":null}},"Body":{"APIServiceCall":{"ResponseStatusCode":"200","ResponsePayload":"{\"eligibilityIndicator\":[{\"service\":\"Mobile\",\"eligible\":true,\"successReasonCodes\":[],\"failureReasonCodes\":[]}]}"}}}
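One way to sidestep the backslash escaping entirely, assuming the event itself is valid JSON, is to let spath pull out the escaped ResponsePayload (spath unescapes the string value for you) and then run spath a second time over that payload. A minimal sketch; the path names follow the sample event above:

| spath path=Body.APIServiceCall.ResponsePayload output=payload
| spath input=payload path=eligibilityIndicator{}.eligible output=eligibility_status
| search eligibility_status=true

The final search line keeps only the events where the indicator is true, which matches what you described wanting.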
Location information for cyber data tends to be very inaccurate, especially if we're talking about mapping IP addresses to physical addresses. You may be able to narrow an IP address to a state or city, but a ZIP/postal code is too fine-grained. If you try, you may find the postal code at the center of the city or state gets used the most, because of the way IP locations are assigned, much the same as how the city at the center of a state is often used for any IP address in that state.
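As a quick illustration of the granularity you can realistically expect, the iplocation command returns City, Region, Country, lat, and lon, but no postal code. A minimal sketch; the index name and the src_ip field are assumptions:

index=firewall_logs
| iplocation src_ip
| stats count BY Country, Region, City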
Hello everyone, I am trying to extract the unique browser names along with their counts from the list of user agents (attached file) that is printed in the user_agent field of the Splunk logs.

index=my_index "master" user-agent!="-" user-agent!="DIAGNOSTICS"
| eval browser=case(
    searchmatch("*OPR*"),"Opera",
    searchmatch("*Edg*"),"Edge",
    searchmatch("*Chrome*Mobile*Safari*"),"Chrome",
    searchmatch("*firefox*"),"Firefox",
    searchmatch("*CriOS*safari"),"Safari")
| stats count as page_hit by browser

I am sure the result count is incorrect, as I am not covering all the combinations of browser strings from the attached list. I would appreciate it if someone could help me with this. Many thanks.
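One approach that may give more accurate buckets is to test the user_agent field directly with match() instead of searchmatch() (which matches against the whole event), and to order the tests from most specific to most generic, since case() stops at the first condition that is true. A minimal sketch; the regexes are assumptions you would tune against the attached agent list, and note that your search writes the field both as user-agent and user_agent, so check which name actually exists:

index=my_index "master" user_agent!="-" user_agent!="DIAGNOSTICS"
| eval browser=case(
    match(user_agent, "OPR/"), "Opera",
    match(user_agent, "Edg[eA]?/"), "Edge",
    match(user_agent, "CriOS/"), "Chrome",
    match(user_agent, "Chrome/"), "Chrome",
    match(user_agent, "(?i)firefox/"), "Firefox",
    match(user_agent, "Safari/"), "Safari",
    true(), "Other")
| stats count AS page_hit BY browser

Safari is tested last because almost every WebKit-based agent string contains "Safari", and the true() default catches anything unmatched so you can see how much is still unclassified.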
I've been diving deeper into using Splunk for analyzing various types of data, and recently I've been exploring how location-based data can provide more insightful trends. Specifically, I've been curious about using zip codes as a meaningful filter for my searches. I've noticed that when I try to correlate events or patterns based on geographical areas, things get a little tricky. I'd love to hear your thoughts on how best to approach this issue, or whether anyone else has encountered similar challenges.

One thing I've realized is that Splunk offers robust tools for organizing and visualizing data, but when I'm dealing with a large dataset, like logs from multiple service locations, finding a way to cleanly incorporate zip codes as a key field for analysis feels like a unique challenge. For example, I recently wanted to track service outages and correlate them with specific zip codes. While I was able to extract the relevant fields using Splunk's field extraction capabilities, I still felt there was a gap in how I could apply the zip code data dynamically across multiple dashboards.

A zip code is a numerical identifier used by postal systems to organize and streamline the delivery of mail to specific geographic regions. In the United States, zip codes typically consist of five digits, with an optional four-digit extension for more precise location targeting. People often ask questions like "What is my zip code?" to clarify the code for their current area. Beyond its primary use in mailing, zip codes are extensively utilized in fields such as marketing, logistics, and data analysis. In Splunk, incorporating zip codes into searches adds a geographical layer that can reveal trends and patterns within datasets.

What I found interesting was how zip codes can act as a lens to uncover patterns that might otherwise go unnoticed. For instance, seeing clusters of events in specific areas made me think differently about how I approach my data analysis in general. One time, I noticed a spike in certain service requests clustered within a few zip codes, and that insight led me to explore potential external factors (like weather or traffic conditions). This kind of context adds a lot of value, and I believe Splunk has the power to deliver it.

That said, I wonder if there are specific tools or configurations within Splunk that would make this process smoother and more intuitive. If anyone has experience working with zip code data in Splunk, what are your tips for making the most of it? Are there specific apps or configurations I should look into for better results? I'd appreciate any advice or ideas.
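In case a concrete starting point helps: one common pattern is to aggregate by zip code, enrich with a latitude/longitude lookup, and feed the result to geostats (or a choropleth map). Everything named in this sketch is a placeholder: the index, the event_type filter, the zip_code field, and the zip_to_latlon.csv lookup are assumptions about your data:

index=service_logs event_type=outage
| stats count AS outages BY zip_code
| lookup zip_to_latlon.csv zip_code OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(outages) AS total_outages

The same stats-by-zip_code result can also drive dashboard tokens, so a single extraction can feed several panels.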
Hi @ITWhisperer,   I need a small tweak to the same query. I am trying to filter the same data, but it should return only the data that does not contain the "hv_vmbus" pattern on the same day.
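In case it is useful as a starting point: one way to drop every event from any day where the pattern appears is to count matches per day with eventstats and keep only the days with zero hits. A minimal sketch only, since the original query is not shown here; the index, sourcetype, and host grouping are assumptions:

index=os_logs sourcetype=syslog
| eval day=strftime(_time, "%Y-%m-%d")
| eventstats sum(eval(if(match(_raw, "hv_vmbus"), 1, 0))) AS hv_vmbus_hits BY day, host
| where hv_vmbus_hits=0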
Chart will put the columns in ascending lexicographic order. To get around this, you can transpose, sort, and transpose back. Try something like this:

| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| chart values(myCount) over zbpIdentifier by _time limit=0 useother=f
| transpose 0 column_name=date header_field=zbpIdentifier
| sort 0 -date
| eval date=strftime(date, "%Y %m %d")
| transpose 0 column_name=zbpIdentifier header_field=date
That is the nature of the set diff command: it will tell you there's a difference, but it doesn't say what the difference is. See https://docs.splunk.com/Documentation/Splunk/9.3.2/SearchReference/Set

An alternative would be to count the members of each group and show those with only one member.

| multisearch
    [ search index=db_assets sourcetype=assets_ad_users $user1$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | eval User=$user1$
    | table Group User ]
    [ search index=db_assets sourcetype=assets_ad_users $user2$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | eval User=$user2$
    | where Group!=""
    | table Group User ]
| stats values(User) as Users by Group
| where mvcount(Users)=1
Hi @DCondliffe1, let me know if I can help you more, or, please, accept one answer for the benefit of other members of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hello all, I am trying to build an OpenTelemetry Collector configuration for the splunk_hec receiver. I am able to get it working and route the data to a tenant based on the token value sent in. What I want to do is handle invalid tokens. Obviously I do not want to ingest traffic with an invalid token, but I would like visibility into it. Is anyone aware of a way to log some sort of message indicating that a bad token was sent in, and what that token value was, and log that to a specific tenant?

Here is an example config line:

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "9ff3a68d-XXXX-XXXX-XXXX-XXXXXXXXXXXX")

Can I do an else or a wildcard value?

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "********-****-****-*********")

Or is there some other way to log a message to the OTel Collector with info like the host or IP and the token value that was sent? I am just looking to gain visibility into invalid token data being sent.
Hi @DCondliffe1, it's probably a documentation error, because there aren't any pre-built panels or dashboards in this add-on, also because this is an add-on and not an app. Since this is a Splunk-supported add-on, open a case with Splunk Support for it. Ciao. Giuseppe
Please read the description above, where it specifically mentions pre-built panels; there is also a YouTube video from Splunk showing a demo in which it demonstrates using pre-built panels.
I agree. The last environment I managed had UF versions ranging from high 6.x to low 9.1.x. Any upgrade readiness scan would light up like a Christmas tree looking at the DS folder.
Hi @DCondliffe1, where did you find that there are pre-built panels in this add-on? This is an add-on without any dashboards or any kind of interface, which is also confirmed by viewing the folders. Ciao. Giuseppe
Hi @dorHerbesman, first of all, you must edit the web-features.conf file, not web.conf. Then you should try a value for each of these rows:

[feature:dashboards_csp]
enable_dashboards_external_content_restriction = true
enable_dashboards_redirection_restriction = true
dashboards_trusted_domain.endpoint1 = http://jenkins
dashboards_trusted_domain.endpoint2 = https://jenkins

as you can read at https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Web-featuresconf#web-features.conf.example

Ciao. Giuseppe
I cannot find the Pre-built panels in the Splunk Add-on for Apache Web Server Version 2.1.0.
I have a problem with the mentioned warning on my search head (attached photo). I tried following the guide here: Configure Dashboards Trusted Domains List - Splunk Documentation and ran:

curl -k -u admin:$password$ https://mysplunk.com:8000/servicesNS/nobody/system/web-features/feature:dashboards_csp -d dashboards_trusted_domain.exampleLabel=http://jenkins/

and got:

curl: (56) Received HTTP code 403 from proxy after CONNECT

I tried running it on the Splunk master and on some of the search heads, and it didn't work. I also tried editing /etc/system/local/web.conf with:

[settings]
dashboards_trusted_domains = http://jenkins https://jenkins

and still get the same error. What am I doing wrong? Thanks in advance to helpers!
Good day, I am trying to get a dashboard up and running to easily find the difference between two users' groups. I pull my information from AD into Splunk, and then if user1 has a group that user2 doesn't have, I can easily compare the two users to see what is missing. For example, users in the same department typically require the same access, but one might have more privileges, and that is what I want to see.

My search works fine; the only problem is that it only gives me the group difference, and I can't see who has that group in order to add it to the user that doesn't have it. I want to add the user next to the group, for example:

Group      User
G-Google   user1
G-Splunk   user2

| set diff
    [ search index=db_assets sourcetype=assets_ad_users $user1$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | table Group ]
    [ search index=db_assets sourcetype=assets_ad_users $user2$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | table Group ]
Dear experts,

My search:

index="abc" search_name="xyz" Umgebung="prod" earliest=-7d@d latest=@d zbpIdentifier IN (454-594, 256-14455, 453-12232)
| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| eval _time=strftime(_time,"%Y %m %d")
| chart values(myCount) over zbpIdentifier by _time limit=0 useother=f

produces the following chart:

For each zbpIdentifier I have a group within the graph showing the number of messages over several days. How do I change the order of the day values within each group? Green (yesterday) should be leftmost, followed by pink (the day before yesterday), then orange, and so on. | reverse changes the order of the whole groups, which is not what I need. All kinds of time sorting, like | sort +"_time" or | sort -"_time", before and after | chart ... do not change anything.