All Topics



Hi, I'm experimenting with the Add-on for Cisco UCS. After configuring the server, tasks, and templates, I get the following message in ta_app_conf.log:

Splunk_TA_cisco-ucs:Manager server is referenced by tasks.conf, but it is disabled or disabled in servers.conf

Here's cisco_ucs_tasks.conf:

  [CISCO_UCS_TEST]
  disabled = 0
  index = main
  interval = 300
  servers = Splunk_TA_cisco-ucs:Manager
  sourcetype = cisco:ucs
  templates = Splunk_TA_cisco-ucs:Basis_Lab_U1_01

And cisco_ucs_servers.conf:

  [Manager]
  account_name = ******
  account_password = ******
  disable_ssl_verification = True
  server_url = bla.bla.bla.bla

I don't see why the server is disabled. Could someone give me a hint? Regards, Jens
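One small thing worth trying (an assumption on my part, not something the quoted log confirms the add-on requires): declare the enabled state of the server stanza explicitly, since the error is raised when the add-on considers the stanza disabled. A minimal sketch of cisco_ucs_servers.conf with the flag spelled out:

  [Manager]
  account_name = ******
  account_password = ******
  disable_ssl_verification = True
  server_url = bla.bla.bla.bla
  # assumption: make the enabled state explicit rather than relying on a default
  disabled = 0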
There were a few orphaned searches that were linked to a user who had left. I have reassigned these knowledge objects to a current user, but the alerts still appear on the message board. Is there somewhere else I need to amend something so that these alerts disappear?
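A quick way to double-check whether anything is still owned by the departed account (a sketch only; the username below is a placeholder, and other object types such as views or macros would need their own REST endpoints):

  | rest /servicesNS/-/-/saved/searches splunk_server=local
  | search eai:acl.owner="departed_user"
  | table title eai:acl.app eai:acl.owner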
Hi, I have a table with at least 3 columns, and I want the results shown on separate lines, like this:

  name | targetUrl | time
  loading... | https://... | 942
  /servicedesk/.../user/... | https://... | 5194
  /servicedesk/customer/... | https://... | 1447

I tried a stats command like:

  | stats count by name targetUrl Time

but the result gave me this instead:

  name | targetUrl | time
  loading... | https://... | 1447
  /servicedesk/.../user/... | https://... | 1447
  /servicedesk/customer/... | https://... | 1447

Can you help me please?
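Assuming name, targetUrl, and time are multivalue fields on the same event (an assumption based on the symptom that every row repeats the last time value), a rough sketch of the usual zip-then-expand pattern:

  | eval zipped=mvzip(mvzip(name, targetUrl, "|"), time, "|")
  | mvexpand zipped
  | eval name=mvindex(split(zipped,"|"),0), targetUrl=mvindex(split(zipped,"|"),1), time=mvindex(split(zipped,"|"),2)
  | table name targetUrl time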
Hi, is there any option to get a list of accelerated data models and of the rules / reports / queries that use each of them? (I need this via an API query.) Thanks!
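A sketch of one possible starting point from the REST API (field names can vary by version, so verify against your own output): list the data model definitions with their acceleration settings, and separately list saved searches whose SPL references data models.

  | rest /servicesNS/-/-/data/models splunk_server=local
  | table title eai:acl.app acceleration

  | rest /servicesNS/-/-/saved/searches splunk_server=local
  | search search="*datamodel*" OR search="*tstats*"
  | table title eai:acl.app search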
Hi, I configured my index like this:

  # volume definitions
  [volume:hotwarm_cold]
  path = /mnt/fast_disk
  maxVolumeDataSizeMB = 5976884

  # index definition (calculation is based on a single index)
  [main]
  homePath = volume:hotwarm_cold/defaultdb/db
  coldPath = volume:hotwarm_cold/defaultdb/colddb
  thawedPath = $SPLUNK_DB/defaultdb/thaweddb
  homePath.maxDataSizeMB = 768000
  coldPath.maxDataSizeMB = 2304000
  maxWarmDBCount = 4294967295
  frozenTimePeriodInSecs = 10368000
  maxDataSize = auto_high_volume
  coldToFrozenDir = /mnt/fast_disk/defaultdb/frozendb

But in index management I see "Max Size of the Entire Index: 500000". What does Max Size of Entire Index do? I configured my hot/warm size to 750 GB, so what happens when the index reaches the Max Size of Entire Index value?

My second question: what does Max Size of Hot/Warm/Cold Bucket do, and what is the difference between auto and auto_high_volume?

Best regards
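For reference, the "Max Size of the Entire Index" value shown in index management corresponds to maxTotalDataSizeMB in indexes.conf, which defaults to 500000 MB when it is not set. A sketch of making it explicit so it covers the per-path limits above (the number is only illustrative, not a recommendation):

  [main]
  # assumption: sized to cover homePath.maxDataSizeMB + coldPath.maxDataSizeMB (768000 + 2304000)
  maxTotalDataSizeMB = 3072000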
We are currently trying to use DLTK to implement our ML scenarios, but there are some questions we need to resolve.

1. When creating users and assigning permissions, I had to add the "Power" role to let them use the "fit" command, but they still cannot access Containers; the error message is "HTTP 403 Forbidden -- You do not have the capability: admin_all_objects". How can I let non-admin users start their own containers, and make sure every user's data is completely separated?

2. If we use Docker for Dev and Kubernetes for Production, how can I build a workflow where a user can easily deploy code to both environments? Is there a more detailed manual?

Thanks.
I have a situation with how our Splunk environment is configured. We have multiple indexers with local SSD drives configured for the hot/warm mount point, and we use our FLASH SAN, attached over FibreChannel, for the cold mount points. We need to expand the hot mount point, and this would require purchasing a fair amount of equipment to expand the installed SSD. We could easily put the hot/warm mount point on the SAN instead. What are the recommendations? Can we use SAN instead of SSD? Here are the bonnie++ and fio outputs for both the SAN and the SSD.

  nohup /usr/local/bin/bonnie++ -d /ssd/splunk -x 1 -u root:root -q -f > /ssd_disk_io_test.csv 2> /ssd_disk_io_test.err < /dev/null

FLASH SAN (size 505G; per-character tests skipped):
  Sequential Output: Block 528475 K/sec (99% CPU), Rewrite 252234 K/sec (75% CPU)
  Sequential Input: Block 539415 K/sec (83% CPU)
  Random Seeks: 11333.2 /sec (69% CPU)
  Sequential Create (16 files): Create 12646 /sec (99% CPU), Read +++++, Delete +++++
  Random Create: Create 14164 /sec (99% CPU), Read +++++, Delete +++++

SSD (size 505G):
  Sequential Output: Block 505646 K/sec (99% CPU), Rewrite 218685 K/sec (62% CPU)
  Sequential Input: Block 504329 K/sec (74% CPU)
  Random Seeks: +++++
  Sequential Create (16 files): Create 12085 /sec (99% CPU), Read +++++, Delete +++++
  Random Create: Create 12162 /sec (99% CPU), Read +++++, Delete +++++

  nohup /usr/local/bin/bonnie++ -d /ssd/splunk -s 516696 -u root:root -fb > /ssd_bonnie-seth.csv 2> /ssd_bonnie-seth.err < /dev/null

NON-SSD (size 516696M):
  Sequential Output: Block 509275 K/sec (97% CPU), Rewrite 241009 K/sec (74% CPU)
  Sequential Input: Block 538182 K/sec (81% CPU)
  Random Seeks: 7549.2 /sec (57% CPU)
  Sequential Create (16 files): Create 616 /sec (17% CPU), Read +++++, Delete 780 /sec (6% CPU)
  Random Create: Create 556 /sec (12% CPU), Read +++++, Delete 884 /sec (7% CPU)

SSD (size 516696M):
  Sequential Output: Block 501271 K/sec (98% CPU), Rewrite 218604 K/sec (62% CPU)
  Sequential Input: Block 523329 K/sec (75% CPU)
  Random Seeks: 8428.1 /sec (60% CPU)
  Sequential Create (16 files): Create 501 /sec (13% CPU), Read +++++, Delete 570 /sec (4% CPU)
  Random Create: Create 499 /sec (15% CPU), Read +++++, Delete 579 /sec (4% CPU)

Here is the output of fio for SSD:
  4k_benchmark: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
  ...
  fio-3.5
  Starting 12 processes
  4k_benchmark: Laying out IO file (1 file / 102400MiB)
  4k_benchmark: (groupid=0, jobs=12): err= 0: pid=29545: Thu May 3 00:53:33 2018
    read: IOPS=1307, BW=1308MiB/s (1371MB/s)(38.4GiB/30063msec)
      slat (usec): min=178, max=445349, avg=9130.87, stdev=15882.69
      clat (msec): min=17, max=2401, avg=1139.06, stdev=330.41
      lat (msec): min=25, max=2420, avg=1148.20, stdev=331.67
      clat percentiles (msec):
       | 1.00th=[ 292], 5.00th=[ 592], 10.00th=[ 718], 20.00th=[ 869],
       | 30.00th=[ 978], 40.00th=[ 1070], 50.00th=[ 1150], 60.00th=[ 1217],
       | 70.00th=[ 1318], 80.00th=[ 1418], 90.00th=[ 1552], 95.00th=[ 1670],
       | 99.00th=[ 1905], 99.50th=[ 2022], 99.90th=[ 2232], 99.95th=[ 2299],
       | 99.99th=[ 2366]
     bw ( KiB/s): min= 4096, max=329728, per=8.27%, avg=110678.16, stdev=45552.32, samples=700
     iops : min= 4, max= 322, avg=107.93, stdev=44.42, samples=700
    lat (msec) : 20=0.01%, 50=0.03%, 100=0.20%, 250=0.62%, 500=2.10%
    lat (msec) : 750=8.87%, 1000=20.50%
    cpu : usr=0.20%, sys=7.40%, ctx=53123, majf=0, minf=1233
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
      submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
      issued rwts: total=39314,0,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency : target=0, window=0, percentile=100.00%, depth=128

  Run status group 0 (all jobs):
    READ: bw=1308MiB/s (1371MB/s), 1308MiB/s-1308MiB/s (1371MB/s-1371MB/s), io=38.4GiB (41.2GB), run=30063-30063msec

  Disk stats (read/write):
    dm-10: ios=78343/9, merge=0/0, ticks=4005205/515, in_queue=4059709, util=99.69%, aggrios=78628/7, aggrmerge=0/3, aggrticks=4012840/24755, aggrin_queue=4037190, aggrutil=99.66%
    sdb: ios=78628/7, merge=0/3, ticks=4012840/24755, in_queue=4037190, util=99.66%

Here is the output for fio for FLASH SAN:
  4k_benchmark: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
  ...
  fio-3.5
  Starting 12 processes
  4k_benchmark: Laying out IO file (1 file / 102400MiB)
  4k_benchmark: (groupid=0, jobs=12): err= 0: pid=54555: Wed Apr 18 10:54:55 2018
    read: IOPS=2462, BW=2463MiB/s (2582MB/s)(72.2GiB/30007msec)
      slat (usec): min=243, max=178818, avg=4850.89, stdev=9427.35
      clat (usec): min=623, max=1639.7k, avg=604472.15, stdev=172064.11
      lat (usec): min=1986, max=1751.5k, avg=609326.97, stdev=173108.88
      clat percentiles (msec):
       | 1.00th=[ 239], 5.00th=[ 393], 10.00th=[ 435], 20.00th=[ 481],
       | 30.00th=[ 514], 40.00th=[ 550], 50.00th=[ 575], 60.00th=[ 609],
       | 70.00th=[ 651], 80.00th=[ 709], 90.00th=[ 818], 95.00th=[ 969],
       | 99.00th=[ 1150], 99.50th=[ 1200], 99.90th=[ 1318], 99.95th=[ 1334],
       | 99.99th=[ 1401]
     bw ( KiB/s): min= 4104, max=416193, per=8.42%, avg=212218.39, stdev=58840.35, samples=701
     iops : min= 4, max= 406, avg=206.70, stdev=57.47, samples=701
    lat (usec) : 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.04%, 50=0.11%
    lat (msec) : 100=0.19%, 250=0.70%, 500=24.14%, 750=60.18%, 1000=10.62%
    cpu : usr=0.27%, sys=15.52%, ctx=101062, majf=0, minf=1234
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
      submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
      issued rwts: total=73895,0,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency : target=0, window=0, percentile=100.00%, depth=128

  Run status group 0 (all jobs):
    READ: bw=2463MiB/s (2582MB/s), 2463MiB/s-2463MiB/s (2582MB/s-2582MB/s), io=72.2GiB (77.5GB), run=30007-30007msec

  Disk stats (read/write):
    dm-11: ios=146961/5, merge=0/0, ticks=276683/6, in_queue=277077, util=99.19%, aggrios=73895/3, aggrmerge=0/0, aggrticks=138910/3, aggrin_queue=139108, aggrutil=99.17%
    dm-2: ios=0/4, merge=0/0, ticks=0/2, in_queue=2, util=0.01%, aggrios=0/3, aggrmerge=0/1, aggrticks=0/2, aggrin_queue=2, aggrutil=0.01%
    dm-0: ios=0/3, merge=0/1, ticks=0/2, in_queue=2, util=0.01%, aggrios=0/2, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    sdc: ios=0/2, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
    sdf: ios=0/2, merge=0/0, ticks=0/1, in_queue=1, util=0.00%
    dm-3: ios=147790/3, merge=0/0, ticks=277820/4, in_queue=278214, util=99.17%, aggrios=147790/3, aggrmerge=0/0, aggrticks=279775/4, aggrin_queue=279008, aggrutil=99.11%
    dm-1: ios=147790/3, merge=0/0, ticks=279775/4, in_queue=279008, util=99.11%, aggrios=73895/1, aggrmerge=0/0, aggrticks=129019/0, aggrin_queue=128825, aggrutil=98.68%
    sdd: ios=73894/2, merge=0/0, ticks=125568/0, in_queue=125371, util=98.66%
    sdg: ios=73896/1, merge=0/0, ticks=132470/0, in_queue=132279, util=98.68%
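For what it's worth, the fio runs above use 1024 KiB sequential reads, while Splunk storage sizing is usually discussed in terms of small-block random IOPS, so a 4k random-read run may be a more telling comparison between the SSD and the SAN. A minimal sketch of such a run (paths, size, and runtime are only placeholders):

  fio --name=hotwarm_randread --directory=/ssd/splunk --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=128 --numjobs=12 --size=10g \
      --runtime=60 --time_based --group_reporting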
Splunk Cloud: v7.2.9 SSE: v3.2.0 In SSE, under Analytics Advisor > MITRE ATT&CK Framework > Available Content > MITRE Att&CK Matrix I am getting the error: Error in 'eval' command: The 'mvmap' function is unsupported or undefined.     SPL for search     | mitremap popular_only="$show_popular_techniques$" content_available=$show_content_available$ groups="$threat_group$" platforms="$mitre_platforms$" | foreach * [ | rex field="<<FIELD>>" "(?<technique_temp>.*) \(" | eval Technique_nogroups=coalesce(technique_temp,'<<FIELD>>') | eval "<<FIELD>> Tactic" = "<<FIELD>>" | eval Matrix="Enterprise ATT&CK" | eval "Sub-Technique"="-" | lookup mitre_environment_count.csv Matrix "Sub-Technique" Technique AS Technique_nogroups, "Tactic" AS "<<FIELD>> Tactic" OUTPUT "Active" "Available" "Needs data" "Data Source" | eval "Data Source"=split('Data Source',",") | eval "Data Source"=mvfilter($data_sources_selected_filter$) | rex field="Data Source" "(?<Data_Source>[^:]*)::(?<Data_Source_Count>.*)" | rename Data_Source AS "Data Source" | eval Selected=if(in('Data Source',$datasource_selection$) , Data_Source_Count,0) | eval Selected=tonumber(coalesce(mvindex(Selected,0,0),0))+tonumber(coalesce(mvindex(Selected,1,1),0))+tonumber(coalesce(mvindex(Selected,2,2),0))+tonumber(coalesce(mvindex(Selected,3,3),0))+tonumber(coalesce(mvindex(Selected,4,4),0))+tonumber(coalesce(mvindex(Selected,5,5),0))+tonumber(coalesce(mvindex(Selected,6,6),0)) | fields - Technique_nogroups technique_temp "Sub-Technique" | eval count = coalesce(count, 1), temp = "t" + count, {temp}='<<FIELD>>', color="#00A9F8", colorby="$colorby$" | eval text='<<FIELD>>' | eval p0_count=coalesce(Active,0) | eval p1_count=coalesce(Available,0) | eval p2_count=coalesce('Needs data',0) | eval p3_count=coalesce('Selected',0) | eval total_count=p0_count+p1_count+p2_count | eval opacity=tostring(case( colorby="Active",p0_count/20, colorby="Available",p1_count/20, colorby="Needs data",p2_count/20, colorby="Total",total_count/20 )) | eval tooltip="Active: ".p0_count."<br />"."Available: ".p1_count."<br />"."Needs data: ".p2_count."<br />"."Total: ".total_count."<br />"."Selected: ".p3_count | eval "<<FIELD>>_Groups"=rtrim(mvindex(split('text', " ("),1),")") | eval "Technique"=mvindex(split('text', " ("),0) | lookup mitre_matrix_list.csv Matrix Tactic AS "<<FIELD>> Tactic" Technique OUTPUT TechniqueId AS "<<FIELD>>"_TechniqueId | eval IsSubTechnique="Yes" | lookup mitre_environment_count.csv Matrix Tactic AS "<<FIELD>> Tactic" Technique IsSubTechnique OUTPUT "Sub-Technique" Active AS Active_SubTechnique Available AS Available_SubTechnique "Needs data" AS "Needs Data_SubTechnique" Sub_Technique_Total AS Total_SubTechnique | eval Opacity_SubTechnique=case( colorby="Active", mvmap(Active_SubTechnique,Active_SubTechnique/10), colorby="Available", mvmap(Available_SubTechnique,Available_SubTechnique/10), colorby="Needs data", mvmap('Needs Data_SubTechnique','Needs Data_SubTechnique'/10), colorby="Total", mvmap(Total_SubTechnique,Total_SubTechnique/10) ) | eval Color_SubTechnique=mvmap(Active_SubTechnique,'color') | eval Active_SubTechniqueJson=mvmap(Active_SubTechnique,"\"Active\": ".Active_SubTechnique),Available_SubTechniqueJson=mvmap(Available_SubTechnique,"\"Available\": ".Available_SubTechnique),NeedsData_SubTechniqueJson=mvmap('Needs Data_SubTechnique',"\"Needs Data\": ".'Needs Data_SubTechnique'),Total_SubTechniqueJson=mvmap('Total_SubTechnique',"\"Total\": ".'Total_SubTechnique'),Color_SubTechniqueJson=mvmap('Color_SubTechnique',"\"Color\": 
\"".'Color_SubTechnique'."\""),Opacity_SubTechniqueJson=mvmap('Opacity_SubTechnique',"\"Opacity\": ".'Opacity_SubTechnique') | eval SubTechniqueValuesMerge=mvzip(Active_SubTechniqueJson,mvzip(Available_SubTechniqueJson,mvzip(NeedsData_SubTechniqueJson,mvzip(Color_SubTechniqueJson,mvzip(Opacity_SubTechniqueJson,Total_SubTechniqueJson))))) | eval Sub_Technique=coalesce(",\"Sub_Techniques\": {".mvjoin(mvzip(mvmap('Sub-Technique',"\"".'Sub-Technique'."\""),mvmap(SubTechniqueValuesMerge, "{".SubTechniqueValuesMerge."}"),": "),",")."}","") | fields - *_SubTechniqueJson Active_SubTechnique Available_SubTechnique "Needs Data_SubTechnique" "Sub-Technique" IsSubTechnique SubTechniqueValuesMerge *_SubTechnique | eval "<<FIELD>>_TechniqueId"=mvdedup('<<FIELD>>_TechniqueId') | eval "<<FIELD>>" = if(text!="",mvappend("TechniqueId: ".'<<FIELD>>_TechniqueId',"Technique: ".Technique,"Color: ".color,"Opacity: ".opacity,"Active: ".p0_count,"Available: ".p1_count,"Needs data: ".p2_count,"Total: ".total_count,"Selected: ".p3_count,"Groups: ".'<<FIELD>>_Groups'),null) | eval "<<FIELD>>"="{".mvjoin(mvmap('<<FIELD>>',"\"".mvindex(split('<<FIELD>>',": "),0)."\": \"".mvindex(split('<<FIELD>>',": "),1)."\""),",").Sub_Technique."}" | eval count = count + 1 ] | fields - temp count Active Available "Needs data" tooltip *Tactic color colorby opacity p0* p1* p2* t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14 t15 t16 t17 t18 text total_count "Data Source" Data_Source_Count Selected p3_count Matrix *_TechniqueId *_Groups Technique "Sub-Technique" Sub_Technique    
Hi, I am trying to extract fields from the user agent string, such as Operating system, Software, Software version, Software type, OS version, and Hardware type. However, I am finding it difficult to extract them. For example, the operating system appears in a different position for Android, iOS, and desktop, as shown below.

Android user - Mozilla/5.0 (Linux; Android 10; SAMSUNG SM-T590) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser / 12.1 Chrome/79.0.3945.136 Safari/537.36

iPhone user - Mozilla/5.0 (iPhone; CPU iPhone OS 14_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1

Desktop user - Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36

Can someone help me extract the following fields from the user agent: Software, Software version, Hardware type, Operating system, Operating system name, and Operating system version?

Thanks
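A rough starting point, assuming the raw string is in a field called http_user_agent (rename to whatever your sourcetype actually uses); the case() buckets are only examples, not a complete user-agent parser:

  | rex field=http_user_agent "\((?<ua_platform>[^\)]+)\)"
  | rex field=http_user_agent "(?<software>SamsungBrowser|Chrome|Firefox|Version|Safari)[ /]+(?<software_version>[\d\.]+)"
  | eval os_name=case(match(ua_platform,"Android"),"Android", match(ua_platform,"iPhone OS"),"iOS", match(ua_platform,"Windows NT"),"Windows", true(),"other")
  | rex field=ua_platform "(?:Android |iPhone OS |Windows NT )(?<os_version>[\d_\.]+)"
  | eval os_version=replace(os_version,"_","."), hardware_type=case(os_name="Android" OR os_name="iOS","mobile", os_name="Windows","desktop", true(),"unknown")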
I am sending JSON events and log events to HEC using Java Logback, and the Log4j output matches the format found in the application log files. How do I send JSON data to Splunk HEC without seeing the fields logger, message, severity, thread, and time in there? For example, if my JSON object is {id:1234, type:issue, {field1:Val1, field2:Val2}}, how do I get it into Splunk HEC without seeing it in my index as:

  { logger:SPLUNK, severity:INFO, thread:main, time:160486996.996, message:{id:1234, type:issue, {field1:Val1, field2:Val2}} }
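The wrapper fields appear to come from the logging appender's event layout rather than from HEC itself. As a point of comparison, posting the object straight to the HEC event endpoint stores only what is placed under the event key. A sketch only: host, port, and token are placeholders, and the nested object is given a hypothetical key, detail, to make it valid JSON:

  curl -k https://splunk.example.com:8088/services/collector/event \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"sourcetype": "_json", "event": {"id": 1234, "type": "issue", "detail": {"field1": "Val1", "field2": "Val2"}}}'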
Is it possible to ingest logs in Splunk using inputs.conf file from a local machine where Splunk Enterprise is installed? 
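For a standalone Splunk Enterprise instance reading its own local files, a minimal monitor stanza sketch (path, index, and sourcetype are placeholders; one common location for it is $SPLUNK_HOME/etc/system/local/inputs.conf or an app's local directory):

  [monitor:///var/log/myapp/app.log]
  index = main
  sourcetype = myapp:log
  disabled = 0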
Hi, I have a dozen UFs that are restarting every ten minutes. They are on Windows, running 7.2 (the latest supported version). What I have checked so far:
- Splunk is excluded from the antivirus
- deploymentclient is disabled
- The UF runs as Local System
Any ideas what could trigger a restart after disabling deploymentclient.conf?
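One place to look (a sketch; it assumes the forwarders still ship their _internal logs, and the literal search strings may need adjusting) is the splunkd log around each restart, to see whether the shutdown is requested cleanly or follows a crash:

  index=_internal sourcetype=splunkd host=<one_of_the_UFs> ("Shutting down" OR "Splunkd starting")
  | sort 0 _time
  | table _time host log_level component _raw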
We received the warnings below in splunkd.log on our indexer servers. We are using a clustered environment with 6 indexers, and the indexers keep going up and down.

  WARN IndexerService - Indexer was started dirty: splunkd startup may take longer than usual; searches may not be accurate until background fsck completes.

  11-09-2020 00:10:41.703 +0000 WARN IndexConfig - Max bucket size is larger than destination path size limit. Please check your index configuration. idx=some_index; bucket size in (from maxDataSize) 750 MB, homePath.maxDataSizeMB=256, coldPath.maxDataSizeMB=0
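For the second warning, a sketch of what a consistent stanza might look like, where the per-path caps are at least as large as a single bucket implied by maxDataSize (paths and numbers here are placeholders, not recommendations for this environment):

  [some_index]
  maxDataSize = auto            # auto buckets are roughly 750 MB
  # keep each path cap comfortably larger than one bucket
  homePath.maxDataSizeMB = 10000
  coldPath.maxDataSizeMB = 50000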
Hi, I'm going to tear down an old, separate Splunk environment to consolidate onto one platform. The main platform uses SmartStore, and its instances are ephemeral, so the disks disappear when they are rebuilt (monthly, for patching etc.). Is it possible to move the existing data from the separate Splunk environment over to the SmartStore S3 bucket, add the index definitions to indexes.conf, and have it automatically pick up the old data?
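For reference, a sketch of what a SmartStore index definition looks like on the destination side (volume name, bucket, and endpoint are placeholders); whether simply copying the old buckets into the S3 bucket is enough is exactly the open question here:

  [volume:remote_store]
  storageType = remote
  path = s3://my-smartstore-bucket/indexes
  remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

  [old_index]
  homePath   = $SPLUNK_DB/old_index/db
  coldPath   = $SPLUNK_DB/old_index/colddb
  thawedPath = $SPLUNK_DB/old_index/thaweddb
  remotePath = volume:remote_store/$_index_name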
Hi, this is the scenario. When I run this search:

  index = "global" productID

I get results like the following:

  { "productID" : "12", "UserID" : "123_username", "type" : "web_based" }, ...

I get hundreds of these results with various productIDs (sometimes the same productID, sometimes different ones), i.e. everything in index="global" that has a productID. The results have two key fields that interest me: productID and UserID.

I also have two additional lookup tables:

  Employee_lookup {username, DepartmentName, ...} - where username == UserID in the query result
  Product_lookup {ProductID, ProductName, ...} - where ProductID == productID in the query result

My goal is one table that contains all the data, with these columns:

  DepartmentName | ProductID | ProductName | UserID
  Sales | 12 | marketing | 123_username
  Business | 12 | marketing | 323_username
  Business | 15 | Online | 523_username

Note that the ProductID and ProductName always go together. All we are doing is fetching the productID, the ProductName matching that productID, and then matching UserID == username to get the DepartmentName for each UserID. Basically, I want to list every productID along with its department name, product name, and UserID. How do I create a query for this end result? Could someone please help me?
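A sketch of the usual two-lookup pattern, assuming the lookups are defined (or uploaded) under the names used above and carry the field names described:

  index="global" productID=*
  | lookup Product_lookup ProductID AS productID OUTPUT ProductName
  | lookup Employee_lookup username AS UserID OUTPUT DepartmentName
  | table DepartmentName productID ProductName UserID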
Hi Community, this is a continuation of another post (https://community.splunk.com/t5/Splunk-Search/Line-Chart-Overlay-based-on-previous-month-and-previous-month-1/td-p/525506), but there are some key changes to the requirements, which is why I created a new post here. I would like a search that lets me compare (overlay) discrete data from 2 different time periods (a full month, day, etc.) based on the time picker selection.

Sample of the requirement (the time periods should be selected with 2 separate time pickers).

Here is the solution from the previous post, which compares the previous month with the month before that:

  -- base query -- setting earliest as -2mon@mon and latest as -0mon@mon
  | eval month=strftime(_time,"%m")
  | stats count as ABC by month condition1 condition2
  | eval EFG=round(ABC/1000,3)
  | stats sum(EFG) as XYZ by condition1 month
  | xyseries condition1 month XYZ

Question 1: How can I change the query so that the time pickers' tokens (2 separate time periods in total) are passed in (the "eval month=..." portion?) and eventually plotted with xyseries?

Question 2: How do I set a token to the time (or period) selected in the time picker and use it in the dashboard panel's title?

  ...
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-2mon@mon</earliest>
        <latest>@mon</latest>
      </default>
    </input>
  </fieldset>
  <init>
    <eval token="monthofperiod1">strftime(relative_time(time($field1$), "???"), "%b")</eval>
  </init>
  <row>
    <panel>
      <title>Sample Period: $monthofperiod1$</title>
  ...

Thanks in advance!
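For Question 1, each panel's search can simply be bounded with earliest=$field1.earliest$ latest=$field1.latest$ (and likewise with the second picker's tokens), since a time input exposes .earliest and .latest sub-tokens. For Question 2, a sketch of one way to derive a title token whenever the picker changes; the format string is only an example, and the if() is there because the picker can return either an epoch value or a relative-time string:

  <input type="time" token="field1">
    <label>Period 1</label>
    <default>
      <earliest>-2mon@mon</earliest>
      <latest>@mon</latest>
    </default>
    <change>
      <eval token="monthofperiod1">strftime(if(isnum(tonumber("$field1.earliest$")), tonumber("$field1.earliest$"), relative_time(now(), "$field1.earliest$")), "%b %Y")</eval>
    </change>
  </input>

The panel title can then reference it as before: <title>Sample Period: $monthofperiod1$</title>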
Hello folks; I'm completely new to Splunk. I am trying to get a table of the top 10 stores for each State for the current week (as the time range), but have no clue how to write it. I would like to see the average Sales for each of those 10 stores, for each State, sorted by -Avg_Sales. The search starts with index = Store_Index... The fields, I guess, would be State, Store, Time, and Avg_Sales. I would appreciate any ideas you may have. Thanks so much for your time. Smiddy
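A sketch assuming the raw events carry State, Store, and a numeric Sales field (adjust the names to your actual data), with the time range pinned to the current week:

  index=Store_Index earliest=@w0 latest=now
  | stats avg(Sales) as Avg_Sales by State Store
  | sort 0 State -Avg_Sales
  | streamstats count as rank by State
  | where rank <= 10
  | fields - rank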
I've tried using props.conf.spec and transforms.conf.spec and some regex to extract a value from a logfile in order to use it as my hostname value. I can see that my regex extracts the right value, but I have two problems:

1.) When I use the GUI to get data in, I can only choose a fixed hostname value before indexing, or use a regex only against the path of the logfile. When I put my tested regex into the hostname field it of course doesn't work. So I guess I first have to set up the sourcetype in props.conf and configure the extraction in transforms.conf.

2.) I can't find an explanation of how to configure the extraction correctly. As I said, the regex seems okay, but in transforms.conf I apparently need the following settings, which I don't know how to use: SOURCE_KEY, DEST_KEY, FORMAT.

The logfile looks similar to this (the host value is "DC1ASM1.dc1.greendotcorp.com"):

  Sep 20 11:13:36 10.50.3.100 Sep 20 11:13:33 DC1ASM1.dc1.greendotcorp.com ASM:"MONEYPAK_WEBAPP","MONEYPAK_CLASS","Blocked","Attack signature detected","4523972057501657341","207.154.35.240","GET /Content/Images/img_logo04_module02.gif HTTP/1.1\r\nHost:...

It is mostly this host name, and I want to extract it and use it as the hostname at index time. This is what I have done so far:

props.conf:

  [f5asm]
  BREAK_ONLY_BEFORE = \w+ \d+ \d+:\d+:\d+ \d+
  BREAK_ONLY_BEFORE_DATE =
  DATETIME_CONFIG =
  LINE_BREAKER = \w+ \d+ \d+:\d+:\d+ \d+
  NO_BINARY_CHECK = true
  TIME_FORMAT = %b %d %H:%M:%S
  TIME_PREFIX = \d+.\d+.\d+.\d+
  category = Custom
  disabled = false
  pulldown_type = true
  TRANSFORMS-hostname = changehost

transforms.conf:

  [changehost]
  DEST_KEY = MetaData:Host
  SOURCE_KEY = MetaData:Host
  REGEX = ([a-zA-Z0-9]([a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,6} +?(?=ASM)
  FORMAT = host::$1

I'm fairly certain I have to change something in transforms.conf, but I can't find an answer. Any ideas how to set up FORMAT, DEST_KEY, and SOURCE_KEY correctly in this case?
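A sketch of how an index-time host override is usually wired up: SOURCE_KEY names the text the regex runs against (_raw here, since MetaData:Host only contains the currently assigned host), DEST_KEY is where the result is written, and FORMAT must reference a capturing group from REGEX. The regex below is only illustrative; it grabs the FQDN that immediately precedes " ASM:".

  # transforms.conf
  [changehost]
  SOURCE_KEY = _raw
  DEST_KEY   = MetaData:Host
  REGEX      = \s((?:[a-zA-Z0-9][a-zA-Z0-9\-]*\.)+[a-zA-Z]{2,6})\s+ASM:
  FORMAT     = host::$1

  # props.conf
  [f5asm]
  TRANSFORMS-hostname = changehost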
I have a table below which shows the status of a package on each host. Normally there are 2 kinds of packages: one with the word 'bw' in its name and one without. In this case I only care about the 'bw' package. If the 'bw' package status is 'Successful' anywhere, I just want to ignore the other 'bw' rows that have a different status (e.g. No_File). Is there any way to do this? I have highlighted the unwanted rows in yellow.

Expected Output:
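A sketch of one way to do this with eventstats, assuming the table has fields host, package, and status (rename as needed): first flag hosts where a 'bw' package already reports Successful, then drop the non-Successful 'bw' rows for those hosts.

  | eventstats max(eval(if(like(package,"%bw%") AND status="Successful",1,0))) as bw_ok by host
  | where NOT (like(package,"%bw%") AND status!="Successful" AND bw_ok=1)
  | fields - bw_ok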
Hi, I need your help with an API query for getting accelerated data model statistics (usage and size). Thanks!
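A sketch using the summarization endpoint (the one the monitoring console's data model audit views draw on) for acceleration size and access statistics; the summary.* field names can vary by version, so verify against the raw | rest output in your environment:

  | rest /services/admin/summarization by_tstats=t splunk_server=local count=0
  | table summary.id summary.size summary.complete summary.access_count summary.last_access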